Intersecting Software Testing Principles & AI/ML Innovation
The seven software testing principles are simply a starting point. AI/ML will allow teams to come far closer to exhaustive testing and significantly improve product quality.
TABLE OF CONTENTS
- There are "Seven" Software Testing Principles
- Testing Principle #1. Testing shows the presence of defects
- Testing Principle #2. Exhaustive testing is not possible
- Testing Principle #3. Early testing
- Testing Principle #4. Defect clustering
- Testing Principle #5. Pesticide paradox
- Testing Principle #6. Testing is context-dependent
- Testing Principle #7. Absence-of-errors fallacy
- Starting the Testing Principles Reality Check (Part 1)
- Finishing the Testing Principles Reality Check (Part 2 - Coming soon!)
There are "Seven" Software Testing Principles
For many years, people have been writing about software testing. One of the most frequently covered topics is the famous seven software testing principles. These principles appear in the ISTQB (International Software Testing Qualifications Board) materials and in dozens of blog posts from people and organizations across the software testing industry.
Here is the list of the seven software testing principles, with explanations drawn from various sources:
Testing Principle #1. Testing shows the presence of defects
Testing contributes to finding and fixing defects. However, that does not mean the product is free of bugs. This principle, which helps set stakeholder expectations, means you should never guarantee that the software is error-free.
Testing Principle #2. Exhaustive testing is not possible
"Combinations of inputs and preconditions" are multiple. Testing is, therefore, a set of planned and scheduled activities with time and cost clearly defined. As testing them all is impossible, a test strategy is required to set out test objectives and prioritize test execution according to the risk analysis performed beforehand.
Testing Principle #3. Early testing
Early testing is the key to identifying defects in the requirements or design phase as soon as possible. It's much easier and less expensive to fix bugs in the early stages of the software development lifecycle than at the end.
Testing Principle #4. Defect clustering
When a defect is found in a specific area of a software product, it becomes a potential cluster (a hotspot) with knock-on effects on related code areas. In other words, a few modules contain the most defects discovered during pre-release testing or show the most operational failures.
Testing Principle #5. Pesticide paradox
Tests must evolve to stay effective at finding undiscovered defects. The pesticide paradox is related to the defect clustering principle: over time, once a hotspot has been fixed, dynamic or static tests that are repeated too often will no longer reveal new defects. These tests therefore need continuous review so the focus can shift and be applied elsewhere.
Testing Principle #6. Testing is context-dependent
Both the criticality of a software/system and the way it is tested depend on its context. Compare military software to an e-commerce website: it's evident that the two won't be tested the same way.
Testing Principle #7. Absence-of-errors fallacy
Testing is performed to assess whether the software/system fits its purpose and user expectations. Finding no defects does not mean that it fulfills its initial requirements.
Software Testing Principles Questions
While the seven software testing principles have been widely accepted, it's important to question their validity. Are these principles truly fundamental truths that guide our approach to software testing, or are they simply observations and advice?
Are the seven software testing principles all actual principles? I know several executives who will read some of them and see excuses instead of principles. Personally, I find some of these software testing principles interesting observations and some good advice; a few feel redundant, and a couple sound very generic.
Aside from their validity as principles, we also need to consider how these principles can adapt to the changing landscape of technology. With the rapid advancements in AI/ML, it's crucial to assess how these innovations impact our understanding and application of these principles. Can they still hold in this new context?
In the second part of this series, we will examine the software testing principles and see if they are affected by AI/ML.
Starting the Testing Principles Reality Check (Part 1)
As discussed in a previous blog post, the biggest challenge in software testing is the vast number of combinations required to test a product exhaustively. That reality is behind several of the testing principles. Let’s take the first one:
Testing principle #1: Testing shows the presence of defects.
This testing principle reminds me of a stand-up meeting many years ago at Ultimate Software. A Software Test Engineer (STE) reported a defect. One of the software engineers, trying to be funny, told him to stop creating defects (he meant the Jira ticket). The STE replied: “I do not create them; you create them; I only find them.”
Testing indeed shows the presence of defects, and testing cannot guarantee that the software is error-free because testing all combinations is impossible. AI cannot change this principle, but it can push the limits of what testing can currently cover much further.
While conducting our first ROI (Return On Investment) analysis of our product during alpha testing, we found that Testaify can discover the application over 100 times faster than a very experienced Software QA Architect. We also found that Testaify could discover, generate, and execute more than 500 tests for a CRM application in less than 55 minutes.
- How long does a manual tester take to discover an application with over 30 web pages and 220 navigation paths?
- How long will it take the same manual tester to design 500 test cases?
- How long will a Software Test Automation Engineer take to automate 500 test cases?
- How long will it take to execute the test automation for those 500 test cases?
We understand if you are having difficulty coming up with a number. In our case, it takes Testaify less than 55 minutes! Now, let’s look at the second testing principle.
Testing principle #2: Exhaustive testing is not possible.
Again, the combinations make exhaustive testing impossible today. But can we come closer to exhaustively testing the product? We can allow Testaify to generate many more test cases than usual. Specifically, we could create and execute 500 test cases in an hour using the same CRM app mentioned before. Our cloud architecture allows us to scale horizontally and accelerate test generation and execution. We just need more AI workers.
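The post doesn't describe Testaify's internals, but the horizontal-scaling idea can be sketched generically: a pool of parallel workers drains a queue of generated test cases, and adding workers increases throughput until the application under test saturates. Everything below (the worker count, the `execute_test` stub) is illustrative, not Testaify's actual design:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def execute_test(test_id: int) -> str:
    """Stand-in for executing one generated test case against the app."""
    time.sleep(0.01)  # simulated round-trip to the application under test
    return f"test-{test_id}: pass"

def run_suite(num_tests: int, num_workers: int) -> list:
    # More workers finish the same suite in less wall-clock time --
    # until the app under test becomes the bottleneck.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(execute_test, range(num_tests)))

results = run_suite(num_tests=100, num_workers=10)
print(len(results))  # 100 results
```

With one worker the sketch above takes roughly a second; with ten, about a tenth of that, which is the whole point of adding "AI workers."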
I owned a Tesla Model S. While my model did not have the feature, Ludicrous Mode was famously available on certain Tesla Model S vehicles. The name comes from Mel Brooks' movie Spaceballs, and, as the name suggests, it is all about speed. An episode of the HBO series Silicon Valley gives you a sense of what it feels like; granted, it plays like a Tesla commercial. Here is the link. Take a moment, watch the video, and then come back.
Now imagine a ludicrous mode for Testaify in light of the software testing principles. In that mode, you could ask Testaify to design and execute as many test cases as possible in 12 or 24 hours. Our projection suggests that the number of test cases executed in 12 hours can exceed 6,000. Is that close to exhaustive testing? No; you would need to reach the millions for exhaustive testing. But imagine testing your own app: could you generate and automate 6,000 tests in 12 hours? What about 12,000 tests in 24 hours? A human team would need years to reach such numbers.
Of course, if 5% of those 6,000 tests produce findings, then humans must validate 300 findings. Our preliminary results suggest that AI/ML on a horizontally scalable platform can change our understanding of the first two software testing principles.
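As a back-of-the-envelope check of the numbers above (assuming the observed 500-tests-in-55-minutes rate scales linearly, an assumption the article itself notes breaks down once the app under test becomes the bottleneck):

```python
tests_observed = 500      # tests discovered, generated, and executed...
minutes_observed = 55     # ...in under 55 minutes during alpha testing

rate = tests_observed / minutes_observed   # ~9.1 tests per minute

tests_12h = rate * 12 * 60
tests_24h = rate * 24 * 60
findings_12h = tests_12h * 0.05            # 5% of tests yield a finding

print(round(tests_12h))     # 6545 -- consistent with "can exceed 6,000"
print(round(tests_24h))     # 13091
print(round(findings_12h))  # 327 findings for humans to validate
```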
AI/ML impact on software testing principles
In theory, the power of AI/ML, combined with the cloud, lets you generate many more tests. Interestingly, we have learned that beyond a certain number of AI workers, the web application under test itself becomes the performance bottleneck, limiting how many functional test cases you can generate and execute.
I do not know about you, but I find this very cool. I hope you do, too. Let’s review one more software testing principle.
Testing principle #3: Early Testing
Early testing is good. A comprehensive testing strategy should treat testing as an activity practiced at each development stage. We are big fans of BDD, which is a great way to build quality into your development process.
We talked about how AI/ML helps teams implement unit testing. Many tools exist to help developers with unit testing; just check them out here. Also, tools like Testaify integrate with your CI/CD pipeline to help you reduce cycle time and find defects sooner rather than later. We should never forget Boris Beizer's observation: "More than the act of testing, the act of designing tests is one of the best bug preventers known."
Early testing is a critical approach to improving quality. I am unsure it should rank as a software testing principle, but we should never forget it: early testing is a good idea, provided it is part of a comprehensive testing strategy.
Check out the final part of this blog post next week as we finish reviewing the software testing principles and examine how they might be affected by AI/ML.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.