The Heartbreaking Truth About Your Functional Test Efforts
Functional testing could break your heart... but not for long! Continuous comprehensive testing powered by AI will change testing forever.
AI to the Rescue!
This blog post is the first in a long series. We recently introduced the concept of Continuous Comprehensive Testing (CCT), but we have yet to discuss in depth what it means. This series of blog posts will provide that deeper understanding of CCT.
In our introductory CCT blog post, we said the following:
Our goal with Testaify is to provide a Continuous Comprehensive Testing (CCT) platform. The Testaify platform will enable you to evaluate the following aspects:
- Functional
- Usability
- Performance
- Accessibility
- Security
While we cannot offer all these perspectives with the first release, we want you to know where we want to go as we reach for the CCT star.
Let’s break this roadmap down one aspect at a time.
According to Wikipedia, functional software testing is conducted to evaluate the compliance of a system or component with specified functional requirements. Functional software testing usually describes what the system does.
Wikipedia also says that functions are tested by feeding them input and examining the output.
Is that a good definition of functional software testing? Maybe. The first two sentences might contradict each other.
In the first sentence, functional testing only evaluates compliance against specified functional requirements. In the second, it describes what the system does. Is everything the system does part of the specified functional requirements? In my experience, no; that is never the case.
One of the most severe problems in software development is that the most challenging and expensive defects tend to exist because the relevant behavior was never included in the specified functional requirements, or was improperly stated in them. It is heartbreaking to find one of these issues in your product.
If I find a flaw during my functional software testing that was not part of the specified functional requirements, is that a defect? If it bothers or confuses a user, it is a defect. It does not matter if it was part of the specified functional requirements. I am a Lean person; as such, I see value through the eyes of the customer. The same applies to quality.
Functional testing is the process of inquiring into how the system works. That inquiry might reveal compliance with the functional requirements, or the lack thereof. Even more interestingly, it can expose questions we have never considered before. Functional testing makes its inquiries by defining a set of test cases. Each test case has a single objective, a set of inputs, and expected outputs (the fewer the expected outputs, the better; in most cases, just one is ideal). Each test case is a hypothesis about how a particular function of the system works.
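To make that concrete, here is a minimal sketch of a single test case in Python with pytest. The `transfer_funds` function and its behavior are invented purely for illustration; the point is the shape of a test case: one objective, explicit inputs, and a single expected output.

```python
# A minimal, illustrative test case (pytest style). The transfer_funds
# function below is a stand-in invented for this example; in a real
# project it would live in the code under test.

def transfer_funds(source_balance: float, amount: float) -> float:
    """Toy implementation: debit the amount from the source balance."""
    return source_balance - amount


def test_transfer_debits_source_by_exact_amount():
    # Objective: a successful transfer reduces the source balance by the amount sent.
    source_balance = 100.00   # input
    amount = 25.00            # input

    new_balance = transfer_funds(source_balance, amount)

    assert new_balance == 75.00  # the single expected output
```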
Now, if you read most posts on functional testing, they mostly talk about types of testing. They write about unit, smoke, sanity, and other kinds of testing as if that is all you need to know. Such posts also try to define these types and, in the process, create a mess that validates the perception that the testing community is the least advanced of the career paths in software development.
If you want to see what I am talking about, search Google for “functional testing definition.” Read a few of the links you get back from the usual software testing vendors, the ones that only care about keeping the status quo intact. Notice how many of them say the same thing. It is as if everybody got their information from the same source.
Only one link, not from a vendor website, actually talks about the most crucial part, even if it could do a better job of explaining it. What is that part? The most essential part of functional software testing is how you inquire about the product.
The essential problem stated in that blog post is that attempting to author every possible test is not only time-consuming and expensive but ultimately impossible. This reality is the heartbreaking truth of software testing, at least for those testers who are systems thinkers.
A few years back, I tried to explain this exact problem to a group of executives. We had a very complicated system with a vast number of paths. I took one web page from the product, representing less than 1% of the overall solution, and asked the executives how many test cases we would need to test that page thoroughly. The guesses were in the dozens, hundreds, or thousands. Then I walked them through the math of what it would take to test that single page exhaustively, and the number came out in the millions.
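Here is a back-of-the-envelope sketch of that math. The page layout below is entirely made up, but even these modest, invented numbers put the exhaustive count in the hundreds of millions:

```python
import math

# Hypothetical page: 10 input fields, each with 6 meaningful value classes
# (valid, empty, too long, wrong type, boundary low, boundary high), plus
# 4 different ways to submit the form (button, Enter key, API call, retry).
values_per_field = [6] * 10
submit_paths = 4

exhaustive_cases = math.prod(values_per_field) * submit_paths
print(f"Exhaustive test cases: {exhaustive_cases:,}")  # 241,864,704
```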
The number of test cases that can realistically be defined and automated is a tiny percentage of all the test cases you would need to cover that functionality exhaustively. In other words, testing is sampling.
So, the critical question you need to answer is: what is the best sample of test cases for this functionality? You can ask the same question in many forms: How do I choose which test cases to create? Which test cases will most reduce the risk of severe issues in the system?
You never get to test as much as you would like. You have to make choices and pick what to test. The answer is to use software testing methodologies that help you reduce the risk of specific types of errors in your system.
Do you know the software testing methodologies? If not, you will need to learn the following (a short sketch of two of them appears right after this list):
- Use case testing
- Equivalence class testing
- Boundary value testing
- Decision table testing
- State transition testing
- Pairwise testing (orthogonal arrays, for the mathematically inclined)
- Domain analysis testing
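As a quick taste, here is a small sketch of two methodologies from the list, equivalence class testing and boundary value analysis, applied to a hypothetical "age" field whose specification (valid integers from 18 to 120) is invented for this example:

```python
# Equivalence class + boundary value analysis for a hypothetical "age" field
# specified as valid for integers 18 through 120 (spec invented for
# illustration). Instead of testing every possible integer, we pick one
# representative per equivalence class plus the values around each boundary.

VALID_MIN, VALID_MAX = 18, 120

equivalence_classes = {
    "below valid range": 10,    # representative invalid value
    "inside valid range": 45,   # representative valid value
    "above valid range": 200,   # representative invalid value
}

boundary_values = [
    VALID_MIN - 1,  # just below the lower boundary -> invalid
    VALID_MIN,      # lower boundary                -> valid
    VALID_MAX,      # upper boundary                -> valid
    VALID_MAX + 1,  # just above the upper boundary -> invalid
]

test_inputs = sorted(set(list(equivalence_classes.values()) + boundary_values))
print(test_inputs)  # [10, 17, 18, 45, 120, 121, 200] -- seven cases
```

Seven carefully chosen inputs stand in for an effectively unbounded set of possible values; each of the other methodologies offers a similarly disciplined way to pick a small, high-value sample.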
How do you comprehensively test the functionality of your system? You will need these methodologies to create a comprehensive functional test suite.
How many of these methodologies do you know? Of the ones you know, how well do you know them? Can you create a comprehensive functional test suite all on your own?
Most people do not know them all, and in many cases, they have not yet mastered the ones they do know.
Imagine creating a comprehensive functional test suite and executing it in a few hours or, better yet, in minutes. There is no need to create test automation scripts. You also don’t have to master all the testing methodologies. You don’t have to feel the never-ending anxiety that you do not have enough time to generate enough test cases. Finally, you have time for exploratory testing. You are no longer praying for quality in every release.
That is what Testaify brings you. That is functional software testing the AI way. If you want to learn more, make sure to subscribe to this blog.
Continue this series:
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.