Have you ever said, “AI won’t help me as much as I thought”?
If you need a car, a drawing isn’t good enough. You might feel the same way when you ask ChatGPT to write your test cases: you’ll get templates, but templates aren’t enough.
AI to the Rescue! - Part 2
This blog post is the second in a long series. If you haven’t read the first post, The Heartbreaking Truth about Functional Testing, please check it out here.
In the first post, we discussed what it takes to achieve comprehensive functional testing. Today, only a few organizations can actually conduct it. It is hard for an organization to hire testers who know all the testing methodologies well enough to get there, and the reality is that most organizations and testers struggle to identify the gaps in their functional test suites.
AI can help close that gap. The challenge is building a solution purposely designed to do so. An AI-first software testing platform can get you there, but even that is a challenge if you don’t know what is required.
Is ChatGPT the key to writing test cases?
When ChatGPT became available to the masses, many people wrote articles about how it would replace testers. Then others wrote counter-arguments attacking that position. Interestingly, you can find examples in both camps that clearly show how little each side understands testing.
Let me give you some examples. One of the original posts asks ChatGPT something like this:
Generate test cases to test a web-based login page.
I am not going to show everything ChatGPT generated. Here is a snippet:
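(Reconstructed for illustration; actual ChatGPT output will vary.)

Functional Tests:

1. Valid credentials: Enter a valid username and a valid password. Ensure that you can log in successfully.
2. Invalid credentials: Enter an invalid username or password. Verify that an error message is displayed and access is denied.
3. Empty fields: Leave the username and password fields blank. Verify that the appropriate validation messages appear.
...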
As you can see, ChatGPT started by generating some “functional tests.” Let’s take a look at the first one:
Valid credentials: Enter a valid username and a valid password. Ensure that you can log in successfully.
Is this suggestion helpful or valuable? It depends on the context. Suppose you are a small company with two developers, no one knows about testing, and you do not know how to test; at least ChatGPT gave you a list of test ideas. On the other hand, if you are familiar with testing, you know that what you just got from ChatGPT is not a test case. It is an attempt at “use case testing,” but even the use case is not clearly defined. There are many open questions: What is a valid username? What is a valid password? How do I know the login succeeded?
ChatGPT’s output depends on your prompt; in other words, on what you ask and how you ask it. As another example, if you tell ChatGPT to:
Prepare a set of use case tests to test file upload,
you'll get something like this (another snippet):
ChatGPT identified 11 use case paths and 13 different “test cases.” Sorry for the quotes, but these are not real “test cases” yet. They are getting closer. A test case must have a single objective, a clear set of input values, and an expected output. You can add more information to a test case, but the core components are the three I mentioned.
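To make those three core components concrete, here is a minimal sketch in Python (the structure and every name in it are mine, invented for illustration; this is not a standard format):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A minimal test case: one objective, concrete inputs, one expected output."""
    objective: str
    inputs: dict   # concrete values, not placeholders like "a valid username"
    expected: str  # an observable result you can verify

# Compare this with "enter a valid username": every value is specific.
login_happy_path = TestCase(
    objective="Verify login succeeds for a known-good account",
    inputs={"username": "jdoe@example.com", "password": "Sp3cific!Pass"},
    expected="User lands on /dashboard and sees 'Welcome, Jane Doe'",
)
```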
I can continue to improve the prompt until I get test cases. After that, I will have to execute them. ChatGPT can help you with that, too. I am not going into every example of how ChatGPT can help you with every step. The following blog post provides examples: https://research.aimultiple.com/chatgpt-test-automation/
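For instance, once you have a concrete test case, you can ask ChatGPT to turn it into an automated script. A minimal Playwright sketch of the valid-credentials case might look like this (the URL, selectors, and data are placeholders I made up; you would swap in your own):

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def test_valid_login():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")      # placeholder URL
        page.fill("#username", "jdoe@example.com")  # placeholder selector and data
        page.fill("#password", "Sp3cific!Pass")
        page.click("button[type=submit]")
        page.wait_for_url("**/dashboard")           # the expected, observable result
        browser.close()

if __name__ == "__main__":
    test_valid_login()
```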
Can ChatGPT stand up to a real-world application?
ChatGPT can help you generate data, analyze your results, and more. This functionality looks great until you look deeper at all these blog posts and notice a pattern: each one shows examples of generic functionality like login and file upload.
I asked ChatGPT to create a specific set of use case tests with valid data and a detailed expected result for a legal practice management web-based application. The prompt is too broad, so ChatGPT gave me a sample and did not provide specific data. It did provide the essential use cases for legal practice management. Rather than actionable steps, you are getting a template. It is like asking for a car but getting a drawing of a car instead.
At this point, you might be saying, “Well, AI will not help me as much as suggested.” That may not be correct. First, you can use specifically targeted prompts to get clear test cases. Here is a prompt that generates actual test cases:
Generate boundary value test cases with specific input values for a field that captures the year of a car.
Give it a try if you have the time. ChatGPT will make some assumptions to generate them, but those assumptions are not terrible.
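If you are curious what those assumptions look like, boundary value analysis picks values at and just around the edges of the valid range. Assuming a range of 1886 (the first production car) through the next model year, which is similar to what ChatGPT tends to assume, the cases come out like this:

```python
# Boundary value test cases for a "car year" field.
# Assumed valid range: 1886 (first production car) through current year + 1
# (the next model year). ChatGPT typically makes a similar assumption.
import datetime

MIN_YEAR = 1886
MAX_YEAR = datetime.date.today().year + 1

boundary_cases = [
    (MIN_YEAR - 1, "reject"),  # just below the minimum
    (MIN_YEAR,     "accept"),  # the minimum itself
    (MIN_YEAR + 1, "accept"),  # just above the minimum
    (MAX_YEAR - 1, "accept"),  # just below the maximum
    (MAX_YEAR,     "accept"),  # the maximum itself
    (MAX_YEAR + 1, "reject"),  # just above the maximum
]
```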
There are a lot of resources out there about Prompt Engineering. Feel free to check them out. They will help you take full advantage of LLM-based AI solutions like ChatGPT.
All these examples are just using AI to accelerate tasks. That is great as it can reduce your development cycle time considerably. These task acceleration improvements are the first phase of AI. I like to call them first-generation AI solutions.
Can we do more with AI?
Yes, we can. That is why we are building Testaify, an AI-first solution. We want to reimagine testing and build something that becomes a team member. Don’t deny it, Mr. Developer: you have always dreamed of having an AI team member.
So, let’s discuss a typical software startup scenario. The company builds a product with a small team of developers; let’s assume three. The product is gaining market traction, but the team is inundated with customer-found defects. Their current regression suite is so tiny that it barely covers the main success scenarios (the happy paths) of the primary use cases.
The company brings a tester onto the team to help create a regression test suite with better functional coverage than what they have now. What is the first thing the new tester will do? Learn the application. She might use requirements documentation, or explore the system on her own if such documentation does not exist. She is engaging in discovery.
The tester is trying to understand the domain model of the application. She is trying to understand all its subtle but essential business rules. She is trying to understand the whole structure of the application. She is building a model of your application.
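One simplistic way to picture such a model (this is an illustration I made up, not how Testaify or any particular tool represents applications internally) is as a graph of states and actions:

```python
# A toy application model: screens as nodes, user actions as edges.
# Purely illustrative; real application models are far richer.
app_model = {
    "login":         {"submit_valid_credentials": "dashboard",
                      "submit_invalid_credentials": "login"},
    "dashboard":     {"open_matter": "matter_detail",
                      "log_out": "login"},
    "matter_detail": {"upload_document": "matter_detail",
                      "back": "dashboard"},
}

# Every (state, action) pair is a candidate test step, and every path
# through the graph is a candidate test scenario.
for state, actions in app_model.items():
    for action, target in actions.items():
        print(f"{state} --{action}--> {target}")
```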
The problem is that unless your application is tiny, building this model of the application will take a long time. You will need weeks, maybe months, and you do not have that kind of time. Somebody hired you to improve the regression test suite. When are you going to start testing? We all know what I am talking about: most managers and executives do not know anything about software testing.
So, at some point, the tester has no choice but to stop discovery and start designing test cases. Can AI help here? Yes. An AI discovery engine can fully discover your application in minutes or hours, depending on its size. The key takeaway is that it takes a person weeks or months to achieve what Testaify can do in minutes or hours.
The next step is to design test cases. Can a product like Testaify know all the software testing methodologies? You bet it does. Armed with a complete model of your application, Testaify can analyze and apply all the testing methodologies and create a comprehensive functional test suite.
If you get too many emails and phone calls from software testing tool vendors, here is an easy way to get them to stop: Ask them, “How does your tool discover my application in minutes and then generate and execute a comprehensive test suite using all the known functional testing methodologies?” None of them can do it. No more calls; they cannot sell to you.
If you want to join the future of testing, it is time to join Testaify’s waitlist. You might just fall in love. But if your new AI team member breaks your heart, we can’t do anything about that. Sorry! So, don’t get too attached to your new team member. I am talking to you, the developer who watches a lot of sci-fi. That’s why our AI workers look like radar bees, not Number Six from BSG.
Hey, where are you going? What about performance, usability, and security? Do not worry. We have more posts coming soon. Follow us so you can get them as soon as they are published.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify into your testing process.