Is your software testing moving as fast as you’d like? Do you feel confident that your next product release will be bug-free and delight users? If your answer is not “absolutely,” you may have a software testing problem, and hiring more testers is likely not the answer.
Are You Making the Most of the Software Testers You Already Have?
Separating software developers from testers during the software development life cycle is a traditional practice that’s common in many large organizations. And that separation makes sense; keeping specialists focused on their designated areas can be great for efficiency. Developers do the software engineering, while testers do the software testing, without getting too technical or too deep in the weeds.
When you have this mindset, it’s easy to then think that if your testing isn’t going fast enough, it’s because you don’t have enough testers on your team. Since testers aren’t supposed to be very technical anyway, leveling up their skills won’t solve the problem, right?
As a former tester — who later evolved out of that position into a senior software engineer — I have to disagree. I’d argue that throwing more testers at your testing problem likely won’t solve it, and definitely won’t help you make the most of the staff and resources you already have.
Instead, I propose that by allowing your testers to be more technical, and by giving them more freedom in their day-to-day work, your testing will become more efficient, and you’ll be more likely to catch project-stalling, revenue-stream-stopping bugs.
Here’s a little backstory to illustrate:
I began my career in software as a tester in the electronics division for a multinational equipment manufacturing conglomerate. I was on a team of about 17 testers (seven when I initially joined) in a division called Software Systems Test Engineering (SSTE).
One of our team’s basic functions was that every time there was a software release, we ran a sanity test that exercised the basic functionality of the software to detect bugs. This manual test worked okay, but a colleague and I saw a big opportunity for improved productivity and more efficient management of resources. We thought,
“What if we could just create a testing tool to run this test for us, and use the time we get back to add value in other ways?”
So, my teammate started developing a test automation framework to replace the manual testing. I, meanwhile, would write tests for his automation framework, which helped me to learn Python — my first programming language.
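The original product and framework aren’t described in this post, so the names and checks below are purely hypothetical, but a test written for a homegrown automation framework often looks much like a standard Python `unittest` case: each formerly manual step of the sanity test becomes one small, repeatable assertion.

```python
import unittest

# Hypothetical stand-ins for the product under test; the real system
# and the team's framework are not described in this post.
def device_boots():
    return True

def get_firmware_version():
    return "2.1.0"

class SanityTest(unittest.TestCase):
    """Basic-functionality checks that used to be run by hand."""

    def test_device_boots(self):
        # Step 1 of the old manual checklist: does the unit power up?
        self.assertTrue(device_boots())

    def test_firmware_version_is_reported(self):
        # Step 2: the device reports a well-formed version string.
        self.assertRegex(get_firmware_version(), r"^\d+\.\d+\.\d+$")

if __name__ == "__main__":
    unittest.main()
```

The payoff of this shape is that a tester can add a new case without touching the framework itself, which is how the author could contribute tests while a teammate built the runner.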
This effort had pretty great results. We were able to automate the entire sanity test. Instead of the test taking 17 people one full day to complete (roughly 136 person-hours), we ran the automated version on three machines, and it finished in about an hour and a half. We could go out for lunch, come back, and the tests would be done.
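The post doesn’t describe how the suite was split across the three machines, but the core idea — partition the suites and run the partitions concurrently — can be sketched in a few lines. This single-process sketch uses three thread-pool workers in place of three machines, and the suite names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suite names; the real test plan isn't described in the post.
SUITES = ["boot", "networking", "display", "logging", "updates", "power"]

def run_suite(name):
    # Placeholder for invoking the automation framework on one suite.
    return (name, "passed")

# The team split the run across three machines; three workers here
# demonstrate the same partitioning idea on one machine.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(run_suite, SUITES))

print(results)
```

The wall-clock win comes entirely from the partitioning: with N roughly equal partitions, the run takes about 1/N of the serial time, which is why three machines could finish in an afternoon what 17 people did in a day.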
Had no one on our team had the technical capacity or the freedom to look at these problems in new ways, the team might still be running the sanity test with 17 or more people.
So step one in evaluating your testing problem? Before you hire more testers, invest in the ones you have. Get them thinking about test automation by writing test cases for an automated framework. Get them thinking about how they can help your team operate more efficiently and experiment with other types of software testing.
Is Your Software Testing Strategy Sending You Down the Same Paths Again and Again?
Once you start automating some of your tests, your more technical testers won’t be sitting on their hands or looking for new jobs. Instead, they’ll have more freedom to think about new ways to optimize.
After we automated the sanity test, for example, we had the time and the brain capacity to step back and notice something: the test kept finding the same kinds of issues each time we ran it. Soon, we realized that the problem was not with the test itself, or our automation, but our testing strategy. We were running tests on parts of the product that hadn’t been updated since the last test, so naturally, the system continued to throw the same errors and we reported the same bugs.
We also realized that there was no clear path to resolution when a bug was detected, so we started having software testers “buddy up” with software developers when bugs were found. We’d sit down together and show the developers what happened when different actions were taken, and why those outcomes were not ideal for the users, so that the developers could understand the issue better and adjust more quickly.
Speaking of the users — where do they fit into this picture? All too often, software testing is done by a team that hasn’t had sufficient exposure to the customers or users of the product. If you don’t know what your target audience cares about, how can you know what will work for them and what won’t?
In addition to catching bugs like, “the program crashes when I hit this button,” we also expanded our test scenarios by having our testers look for things that our clients cared about in the software. That started with educating ourselves about our customers and gathering feedback from them. Then, testers could also start identifying problems like, “this metric won’t be meaningful for these users,” to improve the quality of the software. In essence, we began testing the requirements, documentation, and design.
Instead of saying, “This didn’t work when it should have,” testers started asking, “Should this be here at all?”
This was the point when we really started to see substantial improvements in our product and our testing — when the testers started becoming embedded in the development process and more connected with the customer.
What is Your Team’s Culture?
Pairing testers with software developers, in addition to helping fix individual bugs, was a great practice for our software team as a whole because it helped develop non-adversarial relationships between testers and developers.
Traditionally, there’s a tension between developers and testers that may be inevitable if you don’t actively work against it. Developers do not want to get dinged by testers in the development process because they want to be known for delivering high quality, but testers want to push the limits and assumptions of the system because, well, that's their job.
Testers, especially, don’t want the blame that comes with a serious bug making it out the door, and that could also be a major culprit for why your testing is taking so long. If all the blame will rest on one team’s shoulders for an error, that team probably wants to make darn sure that they test and retest every piece of the product they can touch.
By building partnerships between testers and developers, however, we found that the quality of our developers’ source code increased and our testers caught more issues and finished more quickly. If a bug did still make it into the finished product, we spent less time casting blame and more time putting new plans in place to keep it from happening again. Quality assurance suddenly became about actually assuring quality, instead of blaming each other for it not being so.
Your Software Testing Treatment Plan
If from reading this post you’ve diagnosed yourself with a software testing problem, my biggest piece of advice for treatment is this: empower your testers and your teams. When you start with that idea in mind, you’re more likely to make decisions that positively impact the quality of your software and the health of your team.
Looking for an IoT development firm to guide you through all aspects of your project, from conception to testing to launch? Tell us more about what you’re working on today.