The History of Artificial Intelligence in Software Testing
Exploring the history of artificial intelligence testing solutions reveals their weaknesses and opportunities. Learn how next-gen AI has evolved.
TABLE OF CONTENTS
- What is artificial intelligence?
- Part 1: The Early Days of Artificial Intelligence in Software Testing
- Part 2: Trying to Deal with the Enormous Challenge of Bringing Artificial Intelligence to the Software Testing Market
- Part 3: The Path to Autonomous Testing
What is artificial intelligence?
As we begin this analysis of artificial intelligence in software testing, we must first answer the question: what is artificial intelligence?
Artificial intelligence describes the simulation of human intelligence in computers designed to think and learn like humans. Machines with artificial intelligence use complex architectures and mathematical models to analyze incredible volumes of data, learn from patterns, and make decisions or predictions. Artificial intelligence can be used to solve problems, perform repetitive tasks, interpret data, respond to prompts in conversation, and more. At Testaify, we’re using artificial intelligence to automate the testing process, including having artificial intelligence discover web apps, generate software test cases with realistic input data, run those tests, and report back.
Part 1: The Early Days of Artificial Intelligence in Software Testing
When discussing what sets our Testaify product apart from others in the market, it is essential to highlight its innovative and distinctive features. These features have the potential to inspire a new era in the artificial intelligence software testing market and give us real optimism about the future of software testing.
To see why, it helps to understand the categories within the vast software testing market. The artificial intelligence segment is not just significant; it is the future of software testing. Tools with artificial intelligence capabilities are the ones expected to adapt and survive, which makes this market worth close attention.
I did some research to see whether someone had already done something similar. While different from what I had in mind, a blog post by a testing services company does a decent job of covering some of the players in the market. Here is the blog post I am referring to.
This blog post indirectly covers some of the categories we see in the market and discusses some of the problems these tools are trying to solve. In our post, we will discuss each group's core approaches, the problems they are trying to solve, and the issues with each one.
As outlined in a previous blog post, the fundamental challenge regarding testing is the vast number of combinations required to test a product exhaustively.
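To make the math concrete, here is a back-of-the-envelope illustration (the form and the value counts below are hypothetical):

```python
# Back-of-the-envelope illustration of the combinatorial explosion.
# Suppose a checkout form has 8 fields, and each field has 5 interesting
# values to try (valid, empty, too long, wrong type, boundary).
from math import prod

values_per_field = [5] * 8
print(prod(values_per_field))  # 390625 input combinations for one form alone
```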
The industry created GUI test automation tools to help with this issue and sold them as easy to use: just record and play back. "Record and playback," by the way, is one of the most hated terms in software testing because it does not work. To use a GUI test automation tool effectively, you have to write code, and it took many years and many design patterns before the industry got good at creating useful test scripts.
The advent of these tools created a new job: test automation developer or engineer. In other words, someone has to write these scripts and maintain them. Fragile test automation scripts became the shared pain of most testing groups.
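To see why these scripts were so fragile, here is a minimal sketch of the kind of script a record-and-playback tool produces, written with Selenium in Python. The URL and locators are hypothetical; the point is the brittle absolute XPaths, which break whenever the page structure changes, even if the application still works.

```python
# A typical "recorded" GUI test script (Selenium, Python). The URL and
# locators are hypothetical. Absolute XPaths like these break on any
# cosmetic markup change, not only on real application failures.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")
driver.find_element(By.XPATH, "/html/body/div[2]/form/input[1]").send_keys("user")
driver.find_element(By.XPATH, "/html/body/div[2]/form/input[2]").send_keys("secret")
driver.find_element(By.XPATH, "/html/body/div[2]/form/button").click()
assert "Dashboard" in driver.title  # the locators above fail long before this line does
driver.quit()
```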
Artificial Intelligence Augmented Tools: Test Automation Helpers
One of the earliest attempts to incorporate artificial intelligence in software testing was the emergence of “Test Automation Helpers.” This group focused primarily on addressing the issue of fragile test automation scripts.
The first generation of these artificial intelligence tools implemented self-healing test automation. Gartner describes self-healing as follows:
If a test fails at runtime, AI-augmented tools can explore alternative ways to find the faulty component or information and then fix the broken test with the updated information.
Source: Gartner. (2022, November 28). Market Guide for AI-Augmented Software-Testing Tools.
They focused on freeing test automation engineers from constantly fixing and manually re-running test automation scripts. Some examples are Functionize, mabl, Leapwork, and Testim. Most of these first-generation artificial intelligence testing tool companies started in the mid-2010s. Some, like Testim, have since been acquired.
Today, self-healing is a capability that all artificial intelligence software testing tools must have. Many of these companies continue to evolve, add other capabilities, and expand their support to different aspects of the software testing process.
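Gartner's description maps to a simple idea: when the primary locator fails, try alternatives and remember what worked. Here is a minimal, vendor-neutral sketch of that pattern (not any specific product's implementation; the locators in the usage example are hypothetical):

```python
# Vendor-neutral sketch of self-healing element lookup: try the primary
# locator first, fall back to alternatives, and report which one worked
# so the script can be "healed" with the updated locator.
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """locators: ordered list of (strategy, value) pairs, primary first."""
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            # A real tool would persist the winning locator for next time.
            return element, (strategy, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Hypothetical usage: a stable id first, then two fallbacks.
# element, healed = find_with_healing(driver, [
#     (By.ID, "submit-btn"),
#     (By.CSS_SELECTOR, "form button[type=submit]"),
#     (By.XPATH, "//button[contains(., 'Submit')]"),
# ])
```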
As you can see, their focus was on solving the test automation execution problem. From our perspective, these tools are fixing the wrong problem: instead of tackling the core testing problem, they improve the status quo fractionally. At best, the impact is small.
Artificial Intelligence Augmented Tools: Visual Testing
While most vendors took the self-healing path, one company became synonymous with visual testing: Applitools.
According to Gartner,
Although an application may technically function, it may not render correctly in all instances. Thus, testers need the ability to rapidly perform accurate visual tests across a wide range of OS versions, browsers, and devices, especially for consumer-grade applications. AI can augment visual testing by using a variety of image recognition techniques that replicate a human looking at screens and comparing them. Leading visual tools can also aid with testing for compliance with accessibility standards.
Source: Gartner. (2022, November 28). Market Guide for AI-Augmented Software-Testing Tools.
Unsurprisingly, visual testing emerged as an area where artificial intelligence can help; computer vision was one of the earliest areas of progress in artificial intelligence. While this approach addresses specific problems, it does little for the core issue in software testing. In essence, you are comparing images to see if something is different, and that is a long way from knowing whether an application works. Visual testing is a dumb approach to software testing: there is no intelligence behind it beyond saying that two pictures are different.
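Stripped to its essence, visual testing is an image diff. A minimal sketch using the Pillow library illustrates the point (the screenshot file names are hypothetical):

```python
# The core of visual testing is an image comparison. baseline.png and
# current.png are hypothetical screenshots of the same screen.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
current = Image.open("current.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # None when the two images are pixel-identical

if bbox is None:
    print("Screens match.")
else:
    print(f"Pixels differ in region {bbox}; whether that is a bug, the diff cannot say.")
```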
Visual testing has become a standard practice for mobile developers who must support many different devices, and it is a standard capability on many of the artificial intelligence testing platforms in the market.
Artificial Intelligence Augmented Tools: Quality Analytics
A small group of vendors focuses on capturing data about testing to help predict potential problems, prioritize test cases (test selection), or provide trend information about releases. Analytics are essential to continuously improving the quality of your product. Sealights is an example of such a solution; its product offers features like quality risk insights and test impact analytics. Other vendors, like ACCELQ, provide test selection features that optimize the test suite by removing duplicate tests or findings. Katalon provides test failure analysis by evaluating findings from previous test runs.
Analytics is an essential feature that most artificial intelligence testing tool vendors provide in one way or another. It is a standard expectation on artificial intelligence software testing platforms.
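Test selection of this kind can be sketched in a few lines: map each test to the code it covers, then run only the tests whose covered files intersect the change set. A simplified illustration follows (the coverage map is hardcoded here; a real tool derives it from instrumentation):

```python
# Simplified test impact analysis: run only the tests whose covered
# files intersect the current change set. A real tool derives the
# coverage map from instrumentation; here it is hardcoded.
coverage_map = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

changed_files = {"auth.py"}  # e.g., taken from the current commit

impacted = [test for test, files in coverage_map.items() if files & changed_files]
print(impacted)  # ['test_login', 'test_profile']
```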
Impact of Early Artificial Intelligence Testing Tools
Early artificial intelligence testing tools focused on improving the status quo by automating niche testing practices (visual testing) or accelerating test automation execution (self-healing). The three areas we covered are converging: rather than remaining pure self-healing test automation vendors, vendors are adding visual testing features to their product suites and expanding their analytics capabilities.
We also see vendors expanding their offerings to other testing specialties, such as performance, accessibility, and security, to provide comprehensive testing.
In the next section, we will move to the next generation. Specifically, we will discuss the efforts to achieve artificial intelligence test design.
Part 2: Trying to Deal with the Enormous Challenge of Bringing Artificial Intelligence to the Software Testing Market
In this section, we continue reviewing the artificial intelligence software testing market. We will discuss the different efforts to use artificial intelligence for testing and data generation. In other words, how to achieve artificial intelligence-driven test design.
We see a lot of diversity in test design and generation, with several different approaches coexisting. To design test cases, you need to understand what you are testing; in other words, you must discover the application under test (AUT). Here are the most common approaches:
Model Authoring
Before generating tests, you need a model of your application under test. One approach we often see is a low-code or no-code designer for building the model that describes the application under test. The provider supplies a DSL (Domain-Specific Language), usually combined with JavaScript, which you must learn in order to model your application. Companies like Appvance use this approach.
The problem with this approach is that you now have something new to maintain manually: the model. The bottleneck simply moves upstream of test design, and the whole process slows down.
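To make the maintenance burden concrete, here is what a hand-authored application model might look like. This is a hypothetical format, not Appvance's actual DSL; the point is that every new page, renamed field, or changed workflow means editing it by hand:

```python
# A hypothetical hand-authored application model (not any vendor's
# actual DSL). Every new page, renamed field, or changed workflow
# means editing this structure by hand.
app_model = {
    "login": {
        "fields": ["username", "password"],
        "actions": {"submit": "dashboard"},
    },
    "dashboard": {
        "fields": [],
        "actions": {"open_orders": "orders", "logout": "login"},
    },
    "orders": {
        "fields": ["date_filter"],
        "actions": {"back": "dashboard"},
    },
}
```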
User Journey and Log Monitoring
This approach depends on tracking users' activity in an environment and is sometimes called user journey-based testing. Usually, user journey tracking goes hand in hand with monitoring log files to enhance the model. Companies like Keysight, ProdPerfect, mabl, Qualiti, and others have used this approach.
As you can imagine, this approach has a foundational problem. It depends on monitoring user activity to build the model, and the best place to monitor real users is production. That means your model is based on what is already in production rather than what is in development and needs testing soon (a chicken-and-egg problem). In theory, you can monitor your product managers or QA team using the new feature in a test environment, but that is unlikely to match the diversity of your user base.
The second problem is that your users will only ever exercise some of the features in your product. Sometimes that is by design, as with rarely used administrative features. In other cases, it is the nature of the domain: most people in the US use the benefits features in their HR systems only once per year. In other words, you will always have a partial model of the application under test.
Natural Language Processing (NLP) Test Authoring
This approach is similar to the first one in that you must create something yourself. However, it differs in two critical aspects: you do not need a DSL, and what you define are requirements or tests. Because it takes advantage of NLP, you can write the text in plain English, for example, a step like “Log in as an admin and verify that the dashboard shows the orders table.” One company, testRigor, combines this approach with the user journey-based approach. Other companies using NLP are Sauce Labs (which acquired AutonomIQ), testsigma, ACCELQ, and opkey.
The curious aspect of this approach is whether you can call it test generation at all. Reviewing most of these vendors' websites, you see the phrase “codeless automation” or some variation on the theme. In other words, they know they are not generating tests; they are abstracting the writing of test automation scripts to a higher level.
Even if you consider the English-language sentences a description of the model, is that a complete model? Raise your hand if your product has documented requirements for all of its features. Yes, I know that is rare. And for those of you answering, “We have user stories for all our work”: are user stories requirements? Are they comprehensive requirements? No, they are not.
Generating Tests
These approaches generate test cases using the model or content they captured. To do so, you must apply the well-known software testing methodologies, techniques such as equivalence partitioning and boundary value analysis, to create the different test cases you need. These techniques have existed for decades and are well known to experienced software testers.
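As one illustration, boundary value analysis derives test inputs mechanically from a range definition:

```python
# Boundary value analysis: derive test inputs at and around each edge
# of an inclusive [lo, hi] range.
def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# A hypothetical quantity field that accepts values from 1 to 100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```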
Final Thoughts on the State of Artificial Intelligence Test Design
Overall, the approaches we see in the marketplace for artificial intelligence test design do not significantly improve software testing. Tracking what users do and adding that to your model has value, but it will not give you what you need when you need it; if you depend on user monitoring in production, it is already too late. Handcrafting application models in code or some DSL makes the problem worse.
Improving test automation development and execution are good ideas. However, they are incremental improvements on the status quo and do not address the fundamental issues with software testing.
In the final section of this series, we will discuss the Testaify approach and how it differs from what is currently in the marketplace.
Part 3: The Path to Autonomous Testing
In my experience, great QA people share a never-ending, nagging feeling that they need more tests. They know the truth behind software testing: the fundamental challenge is the vast number of combinations required to test a product exhaustively.
To improve software testing significantly, we must address three essential issues.
- Designing comprehensive test suites is difficult and time-consuming. Most people in the industry do not know all the testing methodologies needed to create them.
- Human-generated test automation wastes time and money. It is fragile and built on a foundation of incomplete or poorly designed tests.
- For too long, information about a product’s quality has been scattered or missing altogether. Most companies make release decisions based on limited information, and sometimes on one person's opinion.
We address these issues by building a continuous comprehensive testing platform that implements a continuous autonomous testing cycle. The first step in our platform is the AI discovery engine.
Discover Application Under Test with Artificial Intelligence
Testaify takes a bot-crawling approach: it discovers the application on its own. The only information it needs is the URL and credentials. After that, our AI worker bees start navigating the application and building a model of it. The model identifies all the states and transitions of the application under test (AUT).
The result is a state transition model of the application.
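As a rough illustration, such a model can be thought of as a set of states (screens) plus labeled transitions (user actions). The sketch below is hypothetical; Testaify's internal representation is not shown here:

```python
# A hypothetical state-transition model of an AUT: states are screens,
# and each transition is a user action that moves the application from
# one state to another.
states = {"login", "dashboard", "orders", "order_detail"}

transitions = [
    ("login", "submit_credentials", "dashboard"),
    ("dashboard", "open_orders", "orders"),
    ("orders", "select_order", "order_detail"),
    ("order_detail", "back", "orders"),
    ("dashboard", "logout", "login"),
]
```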
This approach has several advantages:
- It builds a model of the whole application under test (AUT). If we can navigate to it, we will add it to the model.
- Building the model does not depend on any external artifact or source.
- It does not require human interaction to maintain the model. The AI discovery engine builds it.
- It works on your test environment with development builds. You do not have to wait until you release it to production to track users' journeys, capture logs, and get a model.
- It interacts with your application the same way a user would. It does not require special access to your source code or environment.
- It creates a baseline of the AUT that identifies all its paths, enabling Testaify to provide a comprehensive test suite.
- As it builds the model, the AI discovery engine identifies the domain of the AUT. If your application is for supporting farmers, it will recognize your domain as being related to agriculture.
- Creating the AUT model using a state transition diagram allows us to implement one of the most crucial testing methodologies—a methodology most people in the industry do not know how to use effectively.
Many vendors claim their platform is an autonomous testing platform. A platform can only be autonomous if the discovery process is automated; automated discovery is a prerequisite for successfully implementing test and data generation.
Design Tests via Artificial Intelligence
We can design test cases effectively because we have a complete model and understand your application's domain. Testaify will know all the testing methodologies. Like most artificial intelligence systems, it can use all available information and apply the techniques consistently. Some testing methodologies are so time-consuming that many human testers stop using them; Testaify can use them all and apply them in a matter of minutes.
The Testaify commercial offering will use four testing methodologies. We provide a free version, called Essentials, that uses two of the four. The methodologies we are starting with are listed below, with a sketch of the first one after the list:
- State Transition Coverage
- Basis Path Coverage
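As an illustration of the first methodology, state transition coverage generates test paths from the model so that every transition is exercised at least once. Here is a minimal greedy sketch over a hypothetical model (a real implementation would search deeper and handle far larger graphs):

```python
# Minimal sketch of state transition coverage: walk the model greedily,
# emitting test paths until every transition has been exercised once.
transitions = [
    ("login", "submit_credentials", "dashboard"),
    ("dashboard", "open_orders", "orders"),
    ("orders", "back", "dashboard"),
    ("dashboard", "logout", "login"),
]

def cover_all_transitions(model, start):
    uncovered = list(model)
    paths = []
    while uncovered:
        state, path = start, []
        while True:
            # Prefer an uncovered transition out of the current state...
            step = next((t for t in uncovered if t[0] == state), None)
            if step is None:
                # ...otherwise reuse a covered one that leads to pending work.
                step = next((t for t in model
                             if t[0] == state
                             and any(u[0] == t[2] for u in uncovered)), None)
            if step is None:
                break
            if step in uncovered:
                uncovered.remove(step)
            path.append(step)
            state = step[2]
        if not path:
            break  # no one-step progress possible; a real tool would search deeper
        paths.append(path)
    return paths

for test_path in cover_all_transitions(transitions, "login"):
    print(" -> ".join(action for _, action, _ in test_path))
# Output: submit_credentials -> open_orders -> back -> logout
```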
One of the most time-consuming aspects of software testing is data setup. Some products are so complex that they require an enormous amount of setup time before you can run a single test. While test automation helps accelerate data setup, the important part is choosing the data we need for setup and testing. Understanding context is challenging for a computer, but with the emergence of modern artificial intelligence techniques, emulating that uniquely human capability is far easier today.
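One simplified way to picture context-aware data generation: infer a realistic value from what a field appears to mean. The sketch below uses naive name matching; the field names are hypothetical, and real AI-based generation is far richer:

```python
# Simplified sketch of context-aware test data generation: infer a
# realistic value from what a field appears to mean. Real AI-based
# generation is far richer; the field names here are hypothetical.
import random

def realistic_value(field_name):
    name = field_name.lower()
    if "email" in name:
        return f"user{random.randint(1, 999)}@example.com"
    if "phone" in name:
        return f"555-{random.randint(1000, 9999)}"
    if "date" in name:
        return "2024-05-31"
    if "zip" in name or "postal" in name:
        return f"{random.randint(10000, 99999)}"
    return "sample text"

print({f: realistic_value(f) for f in ["work_email", "phone_number", "start_date"]})
```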
Testaify will design and execute hundreds or even thousands of tests in minutes using the complete model and these testing methodologies. Because Testaify designs and executes tests with artificial intelligence, we address the first two significant issues in software testing today. As we add more capabilities, such as additional functional testing methodologies and performance, security, and usability testing, we will address the third: bringing all the information you need to assess the quality of your product into one place.
The artificial intelligence revolution in software testing has created many companies, and the market opportunity is so significant that new ones keep emerging. Just look at Antithesis's $47 million seed round last February. We now have companies like:
- Wopee - Developer of a software testing bot designed to reduce waste in software development and testing. The company's product is an artificial intelligence and machine learning-based autonomous testing platform with features such as automated planning, creation, maintenance, and analysis of tests, streamlining the testing process by reducing the need for human intervention and helping clients remove testing waste.
- Qualiti - Developer of an artificial intelligence-powered platform designed to test software products with no human input. The company's test automation tool specializes in creating, maintaining, executing, and triaging tests, enabling users to focus on creative QA tasks and the end-user experience.
- Quacks.ai - Developer of automation software designed to automate front-end software testing with no code. The company's software uses generative artificial intelligence (AI) models to mimic human cognition and replace tedious manual testing, enabling businesses to remove unnecessary technical friction with customers.
Like Testaify, these companies all started in this decade. We all see the problems with the old generation of artificial intelligence testing platforms (founded in the 2010s). We all focus on simplifying setup as much as possible to get to autonomous testing. And we all use bot-crawling to discover the application under test. Curiously, all the companies taking this approach like to use an animal mascot: Testaify uses bees, Wopee uses monkeys, and Quacks.ai, as the name implies, uses ducks.
But our approaches differ in meaningful ways. Wopee combines bot-crawling with visual testing (visual assertions), which amounts to automating visual testing rather than automating testing. Qualiti combines bot-crawling with user journey monitoring, which brings with it all the user journey problems described earlier.
The next generation of artificial intelligence solutions is here! Finally, autonomous testing is arriving! The Future of Testing is closer than you think!
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through its AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.