Whether you’re building software, hardware, or connecting the two through the internet of things (IoT), you don’t know anything about your product until it’s in the hands of its users.
Before I transitioned into development, I began my career in embedded software testing and quality assurance, optimizing and maintaining the Python testing automation framework at Johnson Controls.
Now, at Very, I get to build IoT software that solves problems for our clients — and nowhere do I see a greater need for a solid test strategy than when I’m dealing with IoT devices and applications.
Why User Testing is Critical for IoT
For one of my favorite IoT device projects, Very worked with Hop to build the world's first self-serve beer kiosk powered by facial recognition technology. We delivered a product that created a 100% increase in daily revenue — but had we not conducted usability tests, the results could have been mildly disastrous.
To operate the beer tap at the kiosk, we designed and manufactured a “smart” valve to control the flow of beer. During an early manufacturing run, the lines that pumped beer to the tap got too cold, creating ice chips and freezing the valves open. The result? Bill Brock, our VP of engineering, got sprayed in the face with a lot of beer.
Don’t get me wrong — Bill enjoys a good beer or party as much as the next person, but he does think that the beer tap should eventually stop. Ideally, it stops after pouring beer into your glass, not your face.
If we had skipped the crucial step of testing functionality in real-world conditions, a lot of unsuspecting customers could have had a much more intense experience than they anticipated.
What’s the Goal of User Testing?
While we identified a huge bug during testing in the above example, it’s important to note that when we’re talking about user testing for IoT, we’re not just talking about finding bugs. Finding bugs is definitely an outcome of testing, but it’s not the goal.
Instead, the goal of user testing is to get information about what people experience when they use your product.
The person testing your IoT device is an information fountain: someone who uses your product and reports what happens to the people who care about it.
The tester finds bugs when the qualities of the system offend their sensibilities, or when they recognize that the system’s behavior or qualities don’t match the expectations of the project stakeholders.
In the Hop case, Bill was offended by getting an unexpected face full of beer, and he knew that Hop wouldn’t want customers to share that experience.
Traditional Software Testing vs. IoT Testing
Other than classic human error, where do bugs come from? For traditional web applications, you have a database, a back end, and a front end. And every single time you have those things interact, you’re likely to find some pretty fun bugs. (That is, if your definition of “fun” is solving complex problems.)
When I was a tester, I found so many things in integration or end-to-end testing that nobody working on any of those systems saw coming.
In IoT systems, you're adding even more layers. Now, instead of having just a database, a back end, and a front end, you might have:
- A sensor that talks to a hub
- A hub that talks to a cloud-hosted back end
- A back end that, in turn, talks to a database and a front-end application
At a previous job, we referred to this setup as the “feature interaction matrix.” In IoT, that feature interaction matrix explodes. And, like lifting a rock in your garden, that's where you find all the bugs. To minimize bugs, you have to lift up those rocks early and often through integration testing.
There's an image that shows up in every single integration testing blog I've ever seen. And I’m going to use it here because it just works.
This image is typically labeled, “Two unit tests, zero integration tests.” The designer or manufacturer has proved that each individual window opens, but they've never been proven to work together. Your testing approach, especially for IoT, should focus heavily on integration.
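To make the idea concrete, here is a minimal sketch of what an integration-style check for a sensor-to-hub-to-back-end chain might look like. Every class and method name here is hypothetical, invented for illustration; the point is that the test exercises the whole chain at once rather than each layer in isolation.

```python
# Hypothetical stand-ins for the layers of an IoT system.
# A real system would talk over BLE, MQTT, HTTP, etc.; these fakes
# only illustrate the shape of an integration test.

class Sensor:
    def read(self):
        return {"temp_c": 4.0}

class Backend:
    def __init__(self):
        self.store = []  # stands in for the database

    def ingest(self, payload):
        self.store.append(payload)
        return {"status": "ok"}

class Hub:
    def __init__(self, sensor, backend):
        self.sensor = sensor
        self.backend = backend

    def forward_reading(self):
        # The hub relays the raw sensor payload to the back end.
        return self.backend.ingest(self.sensor.read())

def test_sensor_to_backend_roundtrip():
    backend = Backend()
    hub = Hub(Sensor(), backend)
    # One assertion spans all three layers: the "windows" must open together.
    assert hub.forward_reading() == {"status": "ok"}
    assert backend.store[0]["temp_c"] == 4.0
```

A unit test would verify `Sensor.read` or `Backend.ingest` alone; the roundtrip test is where the seams between layers — the place bugs like to hide — actually get exercised.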
Why Testing in the Lab is Not Enough for IoT
Bill got sprayed in the face with beer in the above example because, during the manufacturing stage, we made assumptions about the conditions in which our beer tap would be operating. We assumed that:
- The tap lines would always be the appropriate temperature.
- No solid matter would ever be flowing through the tap lines.
When it came time to test, we discovered that things we assumed would be true were not necessarily reality when our device was out in the wild.
There’s a lot that’s out of your control and hard to predict in IoT once your products hit the real world.
My master’s degree is in Distributed Systems, and one of the things you learn while getting that degree is the “Eight Fallacies of Distributed Computing” — eight things that you assume are true, but aren’t always. They are:
- The network is reliable.
- Latency is zero.
- Bandwidth is infinite.
- The network is secure.
- Topology doesn't change.
- There is one administrator.
- Transport cost is zero.
- The network is homogeneous.
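Take the first fallacy — “the network is reliable” — as an example. A device that assumes every upload succeeds will lose data in the field. A common defense is to retry with backoff; the sketch below is one illustrative way to do that, with all names and defaults invented for the example rather than taken from any real API.

```python
import time

def send_with_retry(send, payload, attempts=3, backoff_s=0.1):
    """Call a flaky network function, retrying with exponential backoff.

    `send` is any callable that raises ConnectionError on failure.
    This is a sketch: a production version would also bound total time,
    handle timeouts, and queue payloads that never get through.
    """
    for attempt in range(attempts):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(backoff_s * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

Testing this kind of code means deliberately simulating the unreliable network — for instance, passing in a fake `send` that fails twice before succeeding — rather than assuming the lab’s rock-solid Wi-Fi.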
The lab is not reality. If you want products that work in the real world, you have to test them outside of the lab. That’s where all the things you assumed would be true start to break down.
Top Tips for IoT Testing
1. Don’t do your own user testing.
If you’re the one building a product, you shouldn’t be the one testing it. As a software engineer myself, I never test my own software because I know that I’ll bring the same assumptions into functionality testing that I brought into writing the software.
2. Get the product in the client’s hands as soon as possible.
You can’t start testing your product until you actually have something ready to test. While having a plan and a strategy is crucial, you’re not going to learn anything new until your product is in the hands of the users.
Don’t be scared of a breadboard. Try the hardware configuration out and say, “All right, I'm going to try this.” Get the prototype to your client and say: “Here, is this even close to correct?” Because before you have something in front of you, it's really hard to talk concretely about the system, the qualities of the system, and the value it provides.
3. Think about all the ways you might fail.
When I’m building something, I challenge myself to consider at least three ways that the approach I’ve chosen might fail. If I haven’t done so, then I haven’t thought about it enough.
- What happens when this sensor is jammed?
- What happens if I try to send solid matter through a beer tap that’s supposed to only hold liquids?
- Will a firmware update brick my smart shoe?
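One way to keep yourself honest about failure modes is to write them down as a table and drive a check from it. The sketch below assumes a simple safety policy — if any subsystem reports a problem, the valve closes — and every name in it is hypothetical, not taken from a real product.

```python
def safe_state(status):
    """Decide the valve position from a dict of subsystem health flags.

    Safe default: any failed subsystem closes the valve. (This is the
    policy that would have kept Bill dry.)
    """
    return "open" if all(status.values()) else "closed"

# Each entry pairs a failure you brainstormed with the conditions it creates.
FAILURE_MODES = {
    "sensor jammed":        {"sensor_ok": False, "valve_ok": True,  "link_ok": True},
    "valve frozen by ice":  {"sensor_ok": True,  "valve_ok": False, "link_ok": True},
    "network dropped":      {"sensor_ok": True,  "valve_ok": True,  "link_ok": False},
}

# Every imagined failure must drive the system to its safe state.
for name, status in FAILURE_MODES.items():
    assert safe_state(status) == "closed", name
```

The table format makes the “at least three ways to fail” rule auditable: if the table only has one row, you haven’t thought about it enough yet.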
4. Use the right IoT testing tools.
Some of the tools that help us build and test products at Very include:
Elixir and Nerves: We like to use Elixir and Nerves to build applications. One of the many reasons is that they’re fault-tolerant. If an Elixir process fails, the failure is contained to that process and that process only. The entire system won’t go down if one user experiences a problem.
Docker, Jumper, and CircleCI: If it’s possible, I recommend automating checks around things like safety. We've made great use of Docker, Jumper, and CircleCI to run a test suite against a particular piece of firmware.
Amazon Mechanical Turk: We’ve used this service to conduct surveys vetting different physical designs and even colors for user interfaces and hardware.
You’re Not Done Developing Until You’ve Finished User Testing
To sum up, you’re not finished developing your software and/or IoT device until you’ve challenged all your initial assumptions through robust user testing.
If you’re looking for a development partner who understands why user testing is critical to your IoT project’s success, that’s us. Connect with one of our experts today and we’ll talk about bringing your idea to life.