Development teams everywhere wonder if a comprehensive Agile testing strategy can even exist.
Jan 27, 2025 · 8 min read

A Comprehensive Agile Testing Strategy: Does one exist?

Many teams face gaps in their Agile testing strategy. Is covering all four of Marick's test matrix quadrants for comprehensive software quality feasible? 


Do Agile teams know what testing needs to happen and when?

In a recent blog post, we talked about the trend of “Testing without Testers.” In the last decade, more organizations have assembled development teams without people in a dedicated testing role.

We shared some examples and some of the key challenges with this trend. However, we did not discuss an underlying assumption: that the team already has a comprehensive testing strategy. In other words, many teams assume they know what testing needs to happen and when, and that this “comprehensive Agile testing strategy” does not require people in a dedicated testing role.

Brian Marick’s Agile Testing Quadrants

At the center of Agile testing, you find Brian Marick’s Agile testing matrix:

The Marick Test Matrix helps us consider quality holistically.

Quadrant 1: Support programming / Technology facing

The first quadrant concerns supporting programming from a technology-facing perspective. Traditionally, this has meant unit testing. While Agilistas push for practices like test-driven development (TDD), only a small percentage of software engineers use it. Still, old-fashioned unit testing using white box testing techniques has existed longer than Agile.
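To make quadrant 1 concrete, here is a minimal sketch of a white-box unit test. The pricing function and its behavior are hypothetical, invented for illustration; in TDD, the test would be written first and the implementation added until it passes.

```python
# Quadrant 1 sketch: a white-box unit test for a hypothetical
# pricing function. The test exercises the normal path, a boundary
# value, and the error path.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0   # normal path
    assert apply_discount(19.99, 0) == 19.99   # boundary: no discount
    try:
        apply_discount(50.0, 150)              # error path
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid percent")

test_apply_discount()
```

A test runner such as pytest would discover `test_apply_discount` automatically; the explicit call at the end simply makes the sketch self-contained.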

Quadrant 2: Support programming / Business facing

The second quadrant concerns supporting programming from a business-facing perspective. This quadrant approach underwent several iterations in the early years of Agile software development. It is generally considered the quadrant for behavior-driven development (BDD). By the way, if you genuinely implement the BDD process, quadrant 2 happens before quadrant 1. In BDD, the scenarios drive the unit tests. 

From a software testing perspective, quadrant 2 defines your acceptance testing suite. Usually, that means using use case testing and ensuring you have BDD scenarios for the primary and alternative paths associated with the specific story or feature you are working on. In essence, you define your minimum regression suite in this quadrant. In the Agile worldview, that regression suite must be fully automated.
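As a sketch of quadrant 2, here is a BDD-style acceptance test for a hypothetical login story. The service, user names, and scenarios are all invented for illustration, and plain test functions stand in for a BDD framework like behave or pytest-bdd; the Given/When/Then comments mirror the Gherkin scenarios that would, in true BDD, drive the unit tests.

```python
# Quadrant 2 sketch: acceptance tests covering the primary path and
# one alternative path of a hypothetical login use case.

class FakeAuthService:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.users = {"alice": "s3cret"}

    def login(self, username: str, password: str) -> bool:
        return self.users.get(username) == password

def test_primary_path_valid_login():
    # Given a registered user
    auth = FakeAuthService()
    # When she logs in with the correct password
    ok = auth.login("alice", "s3cret")
    # Then access is granted
    assert ok

def test_alternative_path_wrong_password():
    # Given a registered user
    auth = FakeAuthService()
    # When she logs in with the wrong password
    ok = auth.login("alice", "wrong")
    # Then access is denied
    assert not ok

test_primary_path_valid_login()
test_alternative_path_wrong_password()
```

Together, the primary and alternative paths form the minimum regression suite the quadrant asks for; in an Agile pipeline, both tests would be fully automated.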

Quadrant 3: Critique Product / Business facing

The third quadrant is about critiquing the product from a business-facing perspective. Before Agile, most functional testing happened in this quadrant, and most QA teams lived here. More advanced organizations build substantial GUI test automation, and in a few organizations, usability testing also occurs in this quadrant. The prevalent Agile approaches to this quadrant represent a radical departure from the traditional testing strategy, and they are why many modern Agile teams do not have people in a testing role. In theory, Agile embraces exploratory testing, and this quadrant is the place for it; in practice, few teams actually do it.

Quadrant 4: Critique Product / Technology facing

The fourth quadrant concerns critiquing the product from a technology-facing perspective. It means conducting all the complex testing that falls under the performance and security umbrellas. Only organizations with enough resources build these teams, and in many instances, this quadrant never gets implemented. It does not help that many performance and security testing tools are costly. I was lucky to have the opportunity to manage the performance testing team at Ultimate Software.

Fundamentally, you must implement all four quadrants of Brian Marick's Agile testing matrix to create a comprehensive testing strategy. What testing strategy patterns exist in practice? We will discuss some we have seen during our careers.

Agile Testing Strategy Patterns

The most extreme pattern I have seen is the one pushed by ThoughtWorks many years ago at Ultimate Software. At the time, their philosophy insisted that all code must be written using pair programming and that only unit tests are needed. In other words, only the first quadrant matters, and we will deal with the rest as it emerges. With this strategy, you do not need testers. It does not take long to realize that this strategy will not work.

This Agile orthodoxy was prevalent in the first decade of the 21st century. But even then, ThoughtWorks employees sought alternatives to this TDD-only approach.

The second pattern, which I see widely adopted with minor differences, shifts left and focuses primarily on quadrants 1 and 2. These teams have unit and integration tests in their CI/CD pipelines. The best teams implementing this approach use BDD in their development process, but this testing strategy has many variants. Some favor unit tests over integration tests (the testing pyramid); others favor integration tests over unit tests (the testing trophy). Some even use BDD for UI tests.
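To illustrate the layer these teams add on top of unit tests, here is a sketch of an integration-style test. The service and repository are hypothetical; the point is that the test exercises two components wired together, the kind of test the trophy model emphasizes and the pyramid model keeps in smaller numbers.

```python
# Shift-left sketch: an integration-style test exercising a
# hypothetical OrderService together with its repository. A unit
# test would stub the repository out; here the two run together.

class InMemoryRepo:
    """Simple in-memory repository standing in for a real database."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)

class OrderService:
    def __init__(self, repo):
        self.repo = repo

    def place_order(self, order_id: str, total: float) -> dict:
        if total <= 0:
            raise ValueError("total must be positive")
        self.repo.save(order_id, {"total": total, "status": "placed"})
        return self.repo.get(order_id)

def test_place_order_persists():
    service = OrderService(InMemoryRepo())
    order = service.place_order("o-1", 42.0)
    assert order == {"total": 42.0, "status": "placed"}

test_place_order_persists()
```

In a real CI/CD pipeline, the in-memory repository would be swapped for a containerized database or a test double closer to production.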

These teams struggle with production defects. Because no one spends time in quadrants 3 and 4, many production issues surface. Again, these teams believe they do not need someone in a testing role, since all their unit and integration tests are automated and defined before coding starts. As long as the developers work closely with the product and UX teams, they have everything they need; so the reasoning goes with this testing strategy.

Another issue with this testing strategy pattern lies in quadrant 2. If no team member understands use case testing well, the team creates an incomplete set of BDD scenarios and misses use case paths. Left unchecked, those missed paths produce some of the most challenging defects to track down in production.

A third testing strategy widely used today is accepting that quadrants 3 and 4 are shift-right endeavors, and you need a way to control the rollout. A popular approach is using feature flags. You release the new feature to a subset of your customers and monitor issues. If those customers are experiencing problems, you do not activate the new feature for the rest of your user population.
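The controlled-rollout idea can be sketched with a deterministic percentage flag. The flag name, hashing scheme, and function are illustrative, not any specific vendor's API; real teams typically use a feature-flag service or an in-house flag store.

```python
# Shift-right sketch: a deterministic percentage rollout. Each
# (flag, user) pair hashes to a stable bucket from 0 to 99, so the
# same user always sees the same variant while the team monitors
# production for issues.

import hashlib

def is_feature_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Bucket a user deterministically into the rollout cohort."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0-99 per user+flag
    return bucket < rollout_percent

# Edge cases: 100% enables everyone, 0% enables no one.
assert is_feature_enabled("new-checkout", "user-42", 100)
assert not is_feature_enabled("new-checkout", "user-42", 0)
```

Because the bucketing is stable, ramping `rollout_percent` from 5 to 50 to 100 only ever adds users to the cohort; if monitoring surfaces problems, the team holds the rollout rather than activating the feature for the rest of the user population.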

I worked with a team that used this approach, which the development team called the warranty period. In essence, the release happens in stages. As defects arise, they get fixed and deployed as quickly as possible.

We described this testing strategy as implementing a shift-left approach (quadrants 1 and 2) and a shift-right approach for the rest (monitoring production for quadrants 3 and 4 issues). Many Agile teams assume this pattern is the Agile way when they develop their testing strategy.

Part 2 of this post will analyze the challenges with these Agile testing strategy patterns.


Challenges with Agile Testing Strategy Patterns

TDD-only Testing Strategy (Quadrant 1 only) 

The challenges with the TDD-only approach are apparent. You are building technology-facing tests and hoping that the feedback from those tests is enough to maintain the quality of your product. Besides, how are you capturing and validating your requirements? Even from the narrow perspective of shift left alone (quadrants 1 and 2 only), this approach leaves a significant gap.

Shift Left-only Testing Strategy (Quadrants 1 and 2) 

As mentioned, one of the biggest challenges with implementing unit and integration tests is capturing all the use case paths. There is an argument regarding the product team's role here. Some may argue that a good product person will do a good job capturing the use case paths. That is possible, but there is a problem called reality. Every survey I have seen about software development challenges points to incomplete or missing requirements as the number one issue developers deal with. I have never met a software engineer who does not complain about the quality of the requirements provided by the product team.

Another challenge with having only unit and integration tests is that you are missing system tests. Even the testing pyramid and testing trophy models include UI system tests; a complete testing strategy cannot exist without them. Yes, both models reduce the number of this type of test, but neither asks you to remove them altogether.

Agile Way Testing Strategy (Shift Left: Quadrants 1 and 2 / Shift Right: Quadrants 3 and 4) 

You may have noticed I avoided mentioning quadrants 3 and 4. I am leaving that for the final pattern, the so-called Agile way. In this pattern, you shift left with quadrants 1 and 2 and shift right for quadrants 3 and 4. What does it mean to shift right? My favorite definition is this one:

Shift right testing is a software development and testing approach that continues testing and development after the software is deployed. It's also known as testing in production. The goal is to ensure that the software performs, is available, and behaves correctly in production.

I described a typical example of this approach using feature flags. In this approach, a subset of users gets the new capabilities in production. Using monitoring tools, the team checks whether the new feature created any issues. Per the definition, the problems could be functional or performance-related, and if you include security monitoring, you can also check for security issues. You can even use feature flags for usability testing, since you have at least two groups providing data: the users with the new feature and those still using the older product version.

There is nothing wrong with monitoring a subset of users. There is nothing wrong with trying new things in production. The question should be about the users. How do they feel about it? I worked with three organizations where we shifted right for some testing. In one case, we had a tiny budget. In these three organizations, I experienced different approaches. My experience suggests that some methods work better than others.

One approach that did not work well was not informing users that they had been selected to try the new feature. In this case, the development team pushed the responsibility of exploratory testing onto these users without their consent. These users regularly complained that the system was unstable and of poor quality.

The same reaction happened in another company in a different context. In this context, the customers knew they were getting the feature first, but due to the nature of the business, many contractors had direct access to the same system. These contractors did not know they were getting the new feature. Like in the previous case, they constantly complained about the product's quality.

In two instances, I saw a better approach—one that at least produced fewer challenges. In both cases, the customers knew they were getting the feature first, and the team prioritized resolving any of their issues. This environment was more collaborative and responsive. It was also more transparent, with clearly defined lines of communication between the company and its customers.

Still, this approach involves outsourcing some of your testing to your users. Is that a good idea? In some contexts, it will work to an acceptable level. However, in every context you still need to meet specific minimum requirements. If your application is very slow, even excellent monitoring that lets you react quickly does not matter; your users will suffer the consequences of skipping performance testing before the release to production.

Final Thoughts on Agile Testing Strategies

In all the surveys about big tech companies and their QA approaches, one company never seems to change its approach: Apple. Apple is famous for producing consistently better-quality products than all the other companies worldwide. It is one of the two big tech companies that still have QA team members. Do you wonder why? I do.

Apple understands that the issue with testing is that you can never test enough. The number of combinations is too high. Instead of trying to reduce QA costs, they know that you need to continue to add to your testing efforts. You need to continue to innovate regarding your testing approach. We are still searching for a solution to the testing problem. Still, Apple also knows not to push the responsibility to their customers.

Feature flags and monitoring are not the answer to quadrants 3 and 4. It might be a beginning, but these strategies do not meet the minimum requirements to build a high-quality product. To deliver a high-quality product, you must implement all four quadrants without outsourcing specific responsibilities to your customers.

At Testaify, we provide an autonomous testing platform that allows you to implement a comprehensive testing strategy without ignoring quadrants 3 and 4. Our initial release will increase functional testing coverage by implementing well-established testing techniques powered by our AI engines. It removes the need to invest large amounts in creating your test automation suite. Testaify discovers your application, designs and executes tests, and reports all its findings without human intervention. It is time for Autonomous Testing!

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver Continuous Comprehensive Testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify into your testing process.