This article was originally published in Testing Trapeze.
At testing meetups, workshops and conferences, context-driven testers are encouraging others to move away from relying solely on test cases. We reason that the prescribed use of test cases and test metrics is an inadequate approach to software testing. We believe that responsible testers must constantly adapt their approach to meet the current needs of the project.
After these events I hear feedback from attendees who are inspired, yet don't feel able to move away from using test cases on their own projects. Some anticipate objections from management, and struggle to explain this different testing approach with confidence. The objective of this article is to offer insight into performing software testing and progress reporting effectively, without relying on test cases, and to help you sell and defend this approach to your stakeholders.
Why not use test cases?
As a contract test specialist I’ve had the opportunity to work on a wide variety of projects. Sometimes I join at the start of a project, but more often I’m hired as the hands-on test manager of an in-flight project, with existing test processes, test cases, and a tight deadline.
All of the test cases I’ve ever inherited have been out-of-date – without exception. They were initially written based on requirements documents, and as the project details changed, the time and effort needed to keep the test cases updated was better spent on testing new features and bug fixes as they were developed. You might have noticed that your own test cases are out-of-date and time-consuming to maintain. Perhaps you’ve wondered if the time you spend re-writing test case steps would be better spent on other testing activities.
At their best, test cases state the obvious to testers who are already familiar with the product. At worst, they contain incorrect or obsolete details which lead new testers astray, and demand constant updates which slow experienced testers down like the proverbial ball-and-chain. Let’s look at three common test activities without the use of test cases: test preparation, test execution and reporting on test progress.
Test preparation
As with traditional projects, I map out a test strategy and/or test plan. What’s different is my approach to them. Instead of filling in templates with standard details common to all projects, I create a custom document or mind map to suit the project, using the Heuristic Test Strategy Model (HTSM) as a guide. I constantly reflect on my actions, and ask myself whether each document, section or paragraph is adding value to the project, and whether it is useful and necessary.
So where’s the harm in writing test cases during the test planning phase? The harm isn’t necessarily at the moment of creation; the cost of creating a test artefact also includes the cost of maintenance. Without maintenance, test cases quickly become outdated. Consider whether the information is already available somewhere else, for example, in a requirements document or user stories. It’s more efficient to keep information updated in a single location, rather than duplicating the information across artefacts and across teams.
Instead of spending substantial amounts of time creating difficult-to-maintain test cases, spend this time on other preparatory activities. Read all available documentation, get familiar with existing company products, explore earlier versions of the product and competitor offerings. Improve your product domain knowledge, learn about business processes – present and planned. Talk to developers about the technology stack. Talk to BAs about their design decisions. As you learn, jot down test ideas. This could be in the form of a mind map, checklists, test charters… anything lightweight which can be easily revised.
Test execution
Working on more-or-less agile projects, I’m focused on testing new features as they’re developed, and on keeping pace with the team. I’ve found that there’s never a shortage of things to be tested.
My main priorities while testing are to verify that explicit and implicit requirements are met, and to find and report bugs. To do this effectively I use a form of structured exploratory testing called thread-based test management (TBTM). This is similar to session-based test management, except that testing activities are organised around threads of test ideas rather than time-boxed sessions. The majority of test ideas are created while testing. I keep a Microsoft OneNote file per testing thread, with a new tab each time I retest a feature. Most of my testing threads are feature-based, corresponding to user stories.
When retesting bug fixes, I use a micro-TBTM approach. I treat each bug as a separate thread and enter my test ideas and results directly into the bug tracking tool as a comment. Anyone tracking that bug will be notified of my change, and can provide feedback or ask questions.
For example:
Passed testing in build 33, test environment 1.
- Postcode field is now populated automatically when suburb is selected – Pass
- Postcode cannot be manually entered or edited – Pass
- Still unable to proceed without entering a valid suburb – Pass
- Typo in validation message, raised separately Bug 2323
- Suburbs which are duplicated in other states “Silverdale” – Pass
- Suburbs containing a space in their name “Cockle Bay” – Pass
- Suburbs which have more than one postcode “Hamilton, NZ” – Pass
- AU/NZ suburb names containing non-alpha characters – Couldn’t easily locate any, not tested
- Tested in latest versions of Chrome, Firefox, IE and Safari
- Tested on desktop and mobile site
- Updated automated check for customer sign-up
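The last point in the example above mentions an updated automated check for customer sign-up. Here’s a minimal sketch of what such a check might look like, assuming pytest and Selenium WebDriver; the URL and element IDs are hypothetical, invented purely for illustration:

```python
# Sketch of an automated check for the postcode behaviour described above.
# Assumes pytest and Selenium WebDriver; the URL and element IDs are
# hypothetical and would need to match the real sign-up page.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()

def test_postcode_autofills_when_suburb_selected(browser):
    browser.get("https://example.test/sign-up")  # hypothetical URL

    suburb = browser.find_element(By.ID, "suburb")      # hypothetical ID
    postcode = browser.find_element(By.ID, "postcode")  # hypothetical ID

    suburb.send_keys("Silverdale")

    # Postcode should be populated automatically once a suburb is chosen...
    WebDriverWait(browser, 10).until(
        lambda d: postcode.get_attribute("value") != ""
    )
    # ...and the field should not be manually editable
    assert postcode.get_attribute("readonly") is not None
```

Keeping a lightweight check like this alongside the bug comment means the fixed behaviour stays covered in future builds, without maintaining a scripted test case.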
TBTM is traceable, auditable and efficient. It’s also great to have heuristic checklists to compare your test coverage against, particularly if you start running out of ideas for things to test. I find it helpful to run through my checklists prior to release, to gain confidence in my test coverage.
Reporting on test progress
A lot of my test reporting is verbal, for example, during daily standup meetings and in response to direct questions. I provide weekly test progress reports to stakeholders to reiterate these verbal reports, starting from the initial test preparation stages. The information in these weekly reports feeds into the test summary report at the end of the project.
I’ve learned a powerful secret in my quest to stop reporting on test-case metrics: Senior management appreciate bullet points almost as much as graphs! So I keep my test reports short, useful and relevant.
Below is an example weekly test status report, which lists up to five points each for the most important outstanding product issues, risks, tasks completed this week, obstacles/blockers to testing, and objectives for next week. Consider which headings you would add or remove for your project:
Top issues:
– Registration failing in Firefox – Bug 1234
– User remains logged in after performing logout – Bug 1235
– Unable to renew accounts before the account expiry date – Bug 1236
Risks:
– Performance test tool license expires next week, payment is still pending approval
Achieved this week:
– Focused on testing new features being developed: registration, login/logout and account page
– Briefly tested renewals and user profiles: no critical issues raised
– Performed thorough cross-browser and cross-device testing of front-end UI
Obstacles/blockers:
– 2 testers were sick this week
– Test environment data refresh delayed until next week
Objectives for next week:
– Continue testing newly developed features and fixes
– Review performance test plan
Any of these points can be a conversation starter, providing you with the opportunity to tell the testing story in more detail. Given the opportunity, you’ll be able to discuss coverage and depth of testing, and perceived product quality. Don’t feel constrained by the questions you’re being asked. For example:
Project manager: What percentage of test cases has passed?
Test lead: That’s not useful information. Here’s what you need to know…
Adding some metrics to your reports is a valid option, for example the number of resolved bugs waiting to be retested, or the number of bugs raised this week by severity. However, be aware that when you include metrics you run the risk of readers misinterpreting the figures, or placing more importance on the metrics than on the rest of the report.
I like to present test coverage and quality perceptions to stakeholders in a visual model, for example as a heat map. Heat maps can be created with mind mapping software but recently I’ve switched to using tables, primarily because stakeholders are more familiar with using spreadsheets and don’t require additional software to be installed on their computers. Here’s an example:
[Example heat map: product features listed down the left, supported platforms across the top, with each cell colour-coded and linked to the corresponding bug numbers]
In this example, green cells represent a working feature, red indicates a feature that isn’t working, orange is a feature that works but has issues, grey is untested and white is unsupported. So, at a glance this heat map shows:
- which areas of the product have and haven’t been tested;
- a list of supported features per platform;
- test team impression of quality per feature; and
- the corresponding issues or bugs.
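A heat map like this can also be generated rather than coloured by hand. Here’s a minimal sketch using pandas (with Jinja2 installed for styling); the features, platforms and statuses are invented purely for illustration:

```python
# Sketch: generate a colour-coded coverage heat map as an HTML table.
# Assumes pandas (plus Jinja2 for styling); the features, platforms and
# statuses below are invented for illustration only.
import pandas as pd

# Colour scheme matching the description above
STATUS_COLOURS = {
    "working":  "background-color: #8bc34a",  # green: feature works
    "broken":   "background-color: #e57373",  # red: feature not working
    "issues":   "background-color: #ffb74d",  # orange: works but has issues
    "untested": "background-color: #e0e0e0",  # grey: not yet tested
    "n/a":      "background-color: #ffffff",  # white: unsupported
}

features = ["Registration", "Login/logout", "Renewals", "User profiles"]
coverage = pd.DataFrame(
    {
        "Desktop": ["working", "issues", "broken", "untested"],
        "Mobile":  ["working", "working", "n/a", "untested"],
    },
    index=features,
)

# Colour each cell according to its status, then export a standalone table
styled = coverage.style.applymap(lambda s: STATUS_COLOURS.get(s, ""))
styled.to_html("coverage-heat-map.html")
```

The resulting HTML table opens in any browser, so stakeholders still don’t need any additional software installed.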
Introducing context-driven testing
At this point you may have noticed something interesting: all of the above methods can be used alongside test cases. Sometimes this is how I introduce context-driven testing to a team: gradually. Each time, my team has found that as the project progresses, there is less and less value in referring back to their old test cases.
A few weeks after making the switch to context-driven testing on one such project, some managers became nervous that we might be missing something by not executing the test cases. The other testers and I started to wonder if they might be right, so we tried an experiment. Half of the team spent the next three days executing test cases, and the other half of the team continued with structured exploratory testing. We met at the end of the three days to review our findings.
Even I was surprised to find that the testers executing test cases had not found a single new bug. They felt frustrated and constrained by the outdated test cases, and had put pressure on themselves to get through the test cases as fast as possible. A large chunk of their time was spent looking up existing bugs they had raised, in order to add the issue number reference to their failed test case results. We also realised that they had hardly spoken to a developer or business representative.
Less surprisingly, the half of the team performing structured exploratory testing reported the exact opposite. They had continued to work closely with the project team, raising bugs and progressing with feature testing coverage. After that we agreed to set the test cases aside in favour of using the business requirements document as a checklist, to confirm whether we’d missed testing any important product features.
Conclusion
There is no one-size-fits-all “best-practice” approach to software testing. I have implemented a new context-driven testing approach for each project I’ve worked on in the last 18 months, across different teams, companies and industries. I’ve earned the respect of project managers, testers, developers and business stakeholders. The turning point for me came after Let’s Test Oz 2014, when I took Fiona Charles’ words to heart – “I’m not going to do bad work”.
The first time I refused to use existing test cases I was nervous. I knew the principles behind context-driven testing and I was keen to implement change. I also felt that I was going out on a limb and putting my professional reputation on the line. Yet it was surprisingly easy to demonstrate the value of context-driven testing approaches to stakeholders, and to gain their support.
Convincing teams to adopt context-driven testing approaches became easier over time. As I built up a portfolio of success stories my confidence improved. I don’t expect junior testers to have this capacity for change within existing project teams. The responsibility lies with test managers and test leads to demonstrate to project stakeholders what software testing really involves.
Context-driven testing is not easy and it’s not boring. It’s challenging, rewarding and – best of all – it’s a more effective way to test. Testers following these principles are more motivated, engaged, valued and respected.
Remember that testing is about providing valuable information, not just the percentage of passed test cases.