Intersecting Software Testing Principles & AI/ML Innovation
The seven software testing principles are only a starting point. AI/ML will allow teams to get closer to exhaustive testing and significantly improve product quality.
TABLE OF CONTENTS
- There are "Seven" Software Testing Principles
- Testing Principle #1. Testing shows the presence of defects
- Testing Principle #2. Exhaustive testing is not possible
- Testing Principle #3. Early testing
- Testing Principle #4. Defect clustering
- Testing Principle #5. Pesticide paradox
- Testing Principle #6. Testing is context-dependent
- Testing Principle #7. Absence-of-errors fallacy
- Software Testing Principles Questions
- Starting the Testing Principles Reality Check (Part 1)
- Let's Complete the Seven Software Testing Principles Reality Check (Part 2)
- Conclusion: None of these principles address software testing's fundamental issues
There are "Seven" Software Testing Principles
For many years, people have been writing about software testing, and among the most popular topics are the famous seven software testing principles. These principles appear in the ISTQB (International Software Testing Qualifications Board) materials and in dozens of blog posts by people and organizations across the software testing industry.
Here is the list of the seven software testing principles, each with an explanation paraphrased from various sources:
Testing Principle #1. Testing shows the presence of defects
Testing contributes to finding and fixing defects. However, this process doesn't mean there aren't any bugs in the product. This principle, which helps to set stakeholder expectations, means you shouldn't guarantee the software is error-free.
Testing Principle #2. Exhaustive testing is not possible
The combinations of inputs and preconditions are virtually limitless, so testing them all is impossible. Testing is, therefore, a set of planned and scheduled activities with clearly defined time and cost. A test strategy is required to set out test objectives and prioritize test execution according to a risk analysis performed beforehand.
Testing Principle #3. Early testing
Early testing is the key to identifying defects in the requirements or design phase as soon as possible. It's much easier and less expensive to fix bugs in the early stages of testing than at the end of the software development lifecycle.
Testing Principle #4. Defect clustering
When a defect is found in a specific area of a software product, it becomes a potential cluster (a hotspot) with knock-on effects on related code areas. In other words, a few modules contain the most defects discovered during pre-release testing or show the most operational failures.
Testing Principle #5. Pesticide paradox
Tests must evolve to remain effective at finding undiscovered defects. The pesticide paradox principle is related to the defect clustering principle. Over time, once a hotspot has been fixed, dynamic or static acceptance tests that are repeated too often will no longer reveal defects. These tests therefore need continuous review so the focus can shift and be applied elsewhere.
Testing Principle #6. Testing is context-dependent
Both the criticality and the manner of testing depend on the software/system's context. If you compare military software to an e-commerce website, it's evident that the two will not be tested similarly.
Testing Principle #7. Absence-of-errors fallacy
Testing is performed to assess whether the software/system fits its purpose and user expectations. Finding no defects does not mean that it fulfills its initial requirements.
Software Testing Principles Questions
While the seven software testing principles have been widely accepted, it's important to question their validity. Are these principles truly fundamental truths that guide our approach to software testing, or are they simply observations and advice?
Are the seven software testing principles all actual principles? I know several executives who will read some of these software testing principles and see them as excuses instead of principles. Personally, I find some of these software testing principles interesting observations, and some are good advice. Some feel redundant or repetitive, and a couple sound very generic.
Aside from their validity as principles, we also need to consider how these principles can adapt to the changing landscape of technology. With the rapid advancements in AI/ML, it's crucial to assess how these innovations impact our understanding and application of these principles. Can they still hold in this new context?
In the second part of this series, we will examine the software testing principles and see if they are affected by AI/ML.
Starting the Testing Principles Reality Check (Part 1)
As discussed in a previous blog post, the biggest challenge in software testing is the vast number of combinations required to test a product exhaustively. That reality is behind several of the testing principles. Let’s take the first one:
Testing principle #1: Testing shows the presence of defects.
This testing principle reminds me of a stand-up meeting many years ago at Ultimate Software. A Software Test Engineer (STE) reported a defect. One of the software engineers, trying to be funny, told him to stop creating defects (he meant the Jira ticket). The STE replied: “I do not create them; you create them; I only find them.”
Testing indeed shows the presence of defects, but it cannot guarantee that the software is error-free because testing all combinations is impossible. While AI cannot change this principle, it can push the limits of what testing is currently able to cover.
While conducting our first ROI (Return On Investment) analysis of our product during alpha testing, we found that Testaify can discover the application over 100 times faster than a very experienced Software QA Architect. We also found that Testaify could discover, generate, and execute more than 500 tests for a CRM application in less than 55 minutes.
- How long does a manual tester take to discover an application with over 30 web pages and 220 navigation paths?
- How long will it take the same manual tester to design 500 test cases?
- How long will a Software Test Automation Engineer take to automate 500 test cases?
- How long will it take to execute the test automation for those 500 test cases?
We understand if you are having difficulty coming up with a number. In our case, it takes Testaify less than 55 minutes! Now, let’s look at the second testing principle.
Testing principle #2: Exhaustive testing is not possible.
Again, the combinations make exhaustive testing impossible today. But can we come closer to exhaustively testing the product? We can allow Testaify to generate many more test cases than usual. Specifically, we could create and execute 500 test cases in an hour using the same CRM app mentioned before. Our cloud architecture allows us to scale horizontally and accelerate test generation and execution. We just need more AI workers.
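To make the combinatorics concrete, here is a minimal, illustrative Python sketch. The field domains are hypothetical (not drawn from any real analysis); the point is simply that even one small form multiplies into an untestable input space:

```python
# Illustrative: count the input combinations for a small, hypothetical
# "create contact" form. Modest per-field choices multiply quickly.
from math import prod

field_choices = {
    "name_length": 50,      # e.g., names from 1 to 50 characters
    "email_variants": 20,   # valid and invalid email formats
    "phone_formats": 10,
    "country": 195,
    "tags": 2 ** 8,         # any subset of 8 optional tags
}

total = prod(field_choices.values())
print(f"{total:,} combinations")  # hundreds of millions for one small form
```

At roughly 499 million combinations for a single form, exhaustively testing a whole application is clearly out of reach; the question is only how close you can get.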
I owned a Tesla Model S. While my model does not have this feature, the ludicrous mode was a famous feature of certain Tesla Model S vehicles. The name comes from the Mel Brooks “Spaceballs” movie. As the name suggests, it is about speed. One of the episodes of the HBO series Silicon Valley gives you a sense of what it is. Granted, it is like watching a Tesla commercial. Here is the link. Take a moment, watch the video, and then come back.
Now imagine a ludicrous mode for Testaify in light of the software testing principles. In that mode, you can ask Testaify to design and execute as many test cases as possible in 12 or 24 hours. Our projection suggests that the number of executed test cases in 12 hours can exceed 6,000. Is that close to exhaustive testing? No, you need to get in the millions for exhaustive testing. But imagine if you want to test your app. Can you generate and automate 6,000 tests in 12 hours? What about 12,000 tests in 24 hours? A team will need years to get to such a number.
Of course, if 5% of those 6,000 tests produce findings, humans must still validate 300 findings. Even so, our preliminary results suggest that what is possible with AI/ML and a horizontally scalable platform can change our understanding of the first two software testing principles.
AI/ML impact on software testing principles
In theory, the power of AI/ML, combined with the cloud, allows you to generate many more tests. Interestingly, we have learned that the number of AI workers needed puts significant pressure on the web app under test: the app itself becomes the performance bottleneck, limiting how many functional test cases you can run.
I do not know about you, but I find this very cool. I hope you do, too. Let’s review one more software testing principle.
Testing principle #3: Early Testing
Early testing is good. A comprehensive testing strategy should treat testing as an activity practiced at each development stage. We are big fans of BDD, which is a great way to build quality into your development process.
We talked about how AI/ML helps teams implement unit testing. Many tools exist to help developers with unit testing; just check them out here. Also, tools like Testaify integrate with your CI/CD pipeline to help you reduce the cycle time and find defects sooner rather than later. We should never forget Boris Beizer's quote: More than the act of testing, the act of designing tests is one of the best bug preventers known.
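As a minimal sketch of what test-first, early testing looks like in practice, the behavior is captured as an executable check before the feature ships. The function and requirement below are hypothetical examples, not from Testaify or any specific tool:

```python
def apply_discount(price: float, percent: float) -> float:
    """Toy implementation; in a test-first flow, the test below is
    written from the requirement before this function exists."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_matches_requirement():
    # Given a $200 order, when a 15% promotion applies,
    # then the customer pays $170 (Given/When/Then, BDD style).
    assert apply_discount(200.0, 15.0) == 170.0

test_discount_matches_requirement()  # a runner like pytest would collect this
```

Writing the check first forces the requirement to be precise, which is exactly the defect-prevention effect Beizer's quote describes.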
Early testing is a critical approach to improving quality. I am unsure if it should be a software testing principle, but we should never forget it. It is important to remember that early testing is a good idea, but it must be part of a comprehensive testing strategy.
Keep reading as we finish reviewing the software testing principles and examine how they might be affected by AI/ML.
Let's Complete the Seven Software Testing Principles Reality Check (Part 2)
We continue reviewing the software testing principles and how AI/ML might impact them, specifically through the emerging tools that implement autonomous testing.
Testing principle #4: Defect Clustering
It is remarkable how the Pareto principle (usually called the 80/20 rule) appears in many different contexts. This software testing principle describes how it shows up in testing: generally speaking, 20% of your product is responsible for 80% of your defects, hence the name defect clustering.
While it is true that defects might cluster at specific periods, we would need to experiment to see whether the clustering actually follows the Pareto principle. The problem is that most QA teams do not have the time to validate this. Even with time, it is challenging to test every system module thoroughly while applying the same software testing techniques in the same way. Most teams accept this principle without questioning its validity. Should defect clustering be a software testing principle? Can AI/ML help us answer this question? Is defect clustering in software testing an example of the Pareto principle? Interestingly enough, a solution like Testaify can help you get a better answer than you have today.
Because Testaify discovers the application in each test session and runs the same testing methodologies in the same way, every time, the reported findings can serve as the data to test the validity of this principle. Testaify can do this without taking time away from your QA team.
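As an illustration of how such findings data could be used, here is a minimal Python sketch that measures how concentrated defects are across modules. The per-module counts are invented for the example:

```python
# Hypothetical per-module defect counts from a test session (invented data).
defects = {
    "auth": 42, "billing": 35, "reports": 6, "search": 5,
    "profile": 4, "admin": 3, "export": 2, "settings": 2,
    "help": 1, "about": 0,
}

total = sum(defects.values())
sorted_counts = sorted(defects.values(), reverse=True)

# How many of the top modules does it take to cover 80% of defects?
covered, modules_needed = 0, 0
for count in sorted_counts:
    covered += count
    modules_needed += 1
    if covered >= 0.8 * total:
        break

share = modules_needed / len(defects)
print(f"{modules_needed} of {len(defects)} modules ({share:.0%}) hold 80% of defects")
```

With consistent, repeatable test sessions feeding this kind of analysis, the 80/20 claim becomes a measurable hypothesis instead of received wisdom.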
If Testaify shows you a defect cluster in a specific area, you can direct your QA team to spend more time there. Testaify can serve as an early warning system, pointing your team to the place with the highest risk profile.
Testing principle #5: Pesticide Paradox
The pesticide paradox software testing principle is related to the previous defect clustering testing principle, which is also why we used the phrase “specific periods” in our description. As defects cluster in one area, all your team members focus there. That means you are taking care of the bugs (applying pesticide), and at some point, that cluster will disappear.
In other words, if you keep testing the same area, at some point you will stop finding defects. More precisely, if you keep testing the same area with the same type of tests, you will stop finding specific kinds of bugs.
The pesticide paradox is why we are designing Testaify to include as many software testing techniques as possible. If your QA team is only proficient in two or three software testing methodologies, your product will eventually stop showing those types of defects. That does not mean no bugs are left; it means your testing has become ineffective. You must go beyond merely adding techniques: Testaify's Oracle, an AI/ML engine, continuously improves test design to keep testing as effective as possible.
To break the metaphor at the core of this software testing principle, software bugs do not tend to evolve and become immune to specific techniques. In the context of software, it is possible to reach a point where what is left is a high-quality product with few significant bugs.
Testing principle #6: Testing is context-dependent
As a context-driven person, I do not disagree with this statement, but times are changing. I have written before about the changing expectations regarding product quality. Some recent examples are this post and this other post.
The text explaining this principle says military software and an e-commerce website are not tested similarly. While that is true, expectations for e-commerce websites have increased in the last few decades.
The focus on user experience has forced those two to get closer than at any time in the past. Unique requirements will continue to exist to keep this difference in place, but the gap is narrowing. This pattern is one of the main reasons behind the creation of Testaify. Many teams need help raising their quality level. That means improving their testing strategy by covering Marick’s Agile Testing Matrix and increasing test coverage.
Testing principle #7: Absence-of-errors fallacy
Yes, finding no defects does not mean there are none, nor does it mean the software fits its purpose. Again, this is true because testing all combinations is impossible. There is no way to resolve this issue unless the application is tiny.
Conclusion: None of these principles address software testing's fundamental issues
As I suggested, the software testing principles are not all principles. Some are good suggestions or clever observations, but it is not clear that they are all fundamental truths or propositions, as is expected of a principle. Most are obvious and depend on the same key constraint. None address the fundamental issues affecting software testing.
What matters is not the list of software testing principles. There is nothing particularly wrong with them, but they generally do not help you address the essential challenges in software testing. Knowing them will cause no harm, but setting the list aside and focusing on the fundamental constraint, and on the critical issues that prevent testing from meeting it, is probably more helpful.
The fundamental challenge regarding testing is the vast number of combinations required to test a product exhaustively.
That constraint drives everything you need to know about software testing. To address this, we learned that testing is sampling. We have to choose what, where, and how to test. This constraint is the reason behind the software testing methodologies.
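The idea that testing is sampling can be sketched in a few lines of Python. The configuration dimensions and test budget below are hypothetical:

```python
# Minimal sketch: since exhaustive testing is impossible, every test plan
# is a sample of the full input space. Here we random-sample a tiny
# configuration space to show how small the tested fraction really is.
import itertools
import random

random.seed(0)  # reproducible sample for the example

browsers = ["chrome", "firefox", "safari"]
locales = ["en", "de", "ja", "es"]
roles = ["admin", "member", "guest"]
plans = ["free", "pro", "enterprise"]

full_space = list(itertools.product(browsers, locales, roles, plans))
budget = 20  # test cases we can afford this cycle
sample = random.sample(full_space, budget)

print(f"Running {len(sample)} of {len(full_space)} combinations "
      f"({len(sample) / len(full_space):.0%} of the space)")
```

Real test strategies sample more deliberately than this (risk-based prioritization, pairwise combination coverage), but the underlying move is the same: choosing what, where, and how to test.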
To face it, we need to address these three key issues:
- Designing comprehensive test suites is difficult and time-consuming. Most people in the industry do not know all the testing methodologies needed to create them.
- Human-generated test automation wastes time and money. It is fragile and built on a foundation of incomplete or poorly designed tests.
- For too long, information about the product's quality has been scattered or entirely missing. Most companies make release decisions based on limited information, and sometimes on one person's opinion.
Ask yourself how we can address these three problems. Is there anything out there that can help?
Our answer is Testaify, an autonomous software testing platform that discovers, designs, executes, and reports on your web-based applications. We believe Testaify will address these three software testing issues and enable development teams to improve their testing significantly.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.