The Journey to Autonomous Software Testing (Part 2)
The emergence of the Internet has much to do with recent testing innovations. But what's next? AI! Let's apply AI to the Marick Test Matrix and then IMAGINE.
The Internet and the Evolution of the Software Business Model
In Part 1 of this post, we discussed the evolution of testing from the first bug to the different testing schools. This post provides a quick review of where we have been and what we have done. Then, we will focus on what is coming next.
Much of the progress of the last 20+ years rests on the technology underpinning it: the emergence of the Internet. For the kids: we used to call it the information superhighway. You can look it up, including a reference to its creator, VP Al Gore (just joking).
Let’s get back to the main point: the arrival of the Internet and, eventually, cloud computing. Together, these technologies completely changed the software business model, along with how we build and test software.
Those technologies and the emergence of artificial intelligence (AI) create the opportunity we now call autonomous software testing. Why do we need this capability? The testing problem is still present; it has not gone away. The quality scope keeps increasing, as we discussed in a previous post.
In a blog post from last year, Testaify, System Thinkers, and Exploratory Testing, we hinted at this new age in quality assurance. We said: “AI workers can generate exponentially more tests, accelerating the time to find defects and making measurable quality enhancements possible!”
What did we mean by that? Well, let’s go to my favorite matrix, Marick’s four quadrants (you are probably tired of my constant references to it):
At the top of the matrix, you have Business Facing, which refers to tests that deal with the business domain. Any business expert will understand these tests. At the matrix’s bottom, you have Technology Facing. Per the name, the focus here is on the technology or the verification of the technical implementation. Support Programming is on the left side of the matrix. This side includes the efforts that support the development of the specific product. On the right side, you have Critique Product. These are the tests trying to break things (the critique). Altogether, this matrix forms four distinct quadrants.
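To make the two axes concrete, here is a minimal sketch that models the matrix as data. The quadrant labels (Q1 through Q4) and the example test types are the conventional ones from the agile-testing literature; the names and structure are illustrative, not part of any real tool.

```python
from enum import Enum

class Facing(Enum):
    BUSINESS = "business-facing"      # top of the matrix
    TECHNOLOGY = "technology-facing"  # bottom of the matrix

class Purpose(Enum):
    SUPPORT = "support programming"   # left side
    CRITIQUE = "critique product"     # right side

# Illustrative mapping of common test types onto Marick's four quadrants.
MARICK_QUADRANTS = {
    ("Q1", Facing.TECHNOLOGY, Purpose.SUPPORT): ["unit tests", "component tests"],
    ("Q2", Facing.BUSINESS, Purpose.SUPPORT): ["functional tests", "story tests"],
    ("Q3", Facing.BUSINESS, Purpose.CRITIQUE): ["exploratory testing", "usability testing"],
    ("Q4", Facing.TECHNOLOGY, Purpose.CRITIQUE): ["performance", "security", "accessibility"],
}

def quadrant_for(facing: Facing, purpose: Purpose) -> list[str]:
    """Return the example test types for a given facing/purpose pair."""
    for (_label, f, p), tests in MARICK_QUADRANTS.items():
        if f is facing and p is purpose:
            return tests
    return []
```

Reading a quadrant is then just a lookup along the two axes; for example, `quadrant_for(Facing.TECHNOLOGY, Purpose.CRITIQUE)` returns the quadrant-four test types discussed next.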
The problem is obvious. How do you implement all four quadrants? Let’s focus on quadrant four for a moment. It includes performance (a big umbrella on its own), security, and accessibility testing. The focus of this quadrant is to see if the technical implementation meets the standards for a successful product. Do you have enough time and people to cover all these aspects? Today's answer is mostly no, unless you have a budget as large as Apple, Google, or any other big tech company. However, as usually happens, technology brings opportunities to make such capabilities available to organizations with smaller budgets.
Like John Lennon, let’s imagine a world where you can create and destroy test environments as needed. We have that. Now, let’s add the capability to run tests against those environments anytime during the day. We have that, too. Finally, we need someone or something that can design and create the test data and run those tests in minutes instead of days or weeks. Do we have it? Wait, maybe we do have it.
Imagine once again a world where you can tell an AI system, “I want you to discover, design, create test data, and execute the tests on your own. I want you to do the work and report your findings.” Not only that, imagine a world where you can have one AI system doing functional testing, another doing performance testing, another doing usability testing, etc. You can tell the AI system to generate at least 10,000 functional tests if that is what you want. For the first time, you can implement a continuous, comprehensive testing strategy.
This new era is the age of autonomous testing via AI/ML. It is a world where your team has people and AI workers working together to create software products quickly with a higher quality standard than you have ever seen before.
Imagine!
Did you miss part 1 of this post? Find it here: There's Four, No, Five Schools of Testing
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.