Category Archives: Learning

Coaching Testers vs. Developers

After many years leading and coaching testers, for the past 6 months I’ve been working with developers as a quality coach. I’m keen to share lessons learned from my experience so far and to learn from others working in Quality Assistance and Quality Coaching roles.

What’s different?


Testers are eager to learn more about testing; it’s their passion. The “pull method” of providing coaching and training on demand worked very well: I made it known that I was available to the team, and they approached me to arrange sessions. Testers reached out to ask for advice, and I was invited to meetings, asked to consult or collaborate, or asked to review work. In some cases my reputation in the testing community preceded me to a new role, and testers were keen to collaborate from the moment I joined the company.

Great developers are also open to learning more about testing, but the word ‘enthusiastic’ doesn’t spring to mind. If I were to rely solely on the pull method of waiting to be asked for coaching, I would have a lot of spare time on my hands. Instead I attend standup meetings, planning sessions and retros, and I look for opportunities to propose coaching and pairing sessions. This requires me to be more confident, assertive, and somewhat persistent. It’s fair to say that my reputation doesn’t precede me with developers!

Arranging the first session with each developer is the biggest hurdle. The quality assistance/coaching model is relatively new for most people, and there’s uncertainty around what will be involved. As with most things, it gets easier each time.

Training method

For a team of testers, I could present three new concepts in three weeks. For example: mind-mapping test scenarios, the focus/defocus exploratory testing technique, and consistency heuristics. Testers love learning these techniques and incorporating them into their regular work. I picture them adding new tools to their toolbox, helping them find quality issues more efficiently and effectively. The value and usage of each new tool is self-evident: it’s obvious to the testers where and when each new concept will be useful.

Working with developers, any coaching on testing methods is more successful if demonstrated and proven in the context of the code/feature/product they’re currently working on. I’ve found that I’m not able to update and reuse my existing presentations and materials, which are theory-based and use practice testing websites. It’s almost as though the method you’re demonstrating needs to find a bug in your own product during the training session to be considered a technique worth learning. I picture developers with a small extra toolbox for testing that’s already full, and they’re not yet convinced they need to purchase a larger toolbox!

My own expertise

When doing one-on-one coaching with testers I learn new things every single time, such as domain knowledge, keyboard shortcuts, and useful browser extensions. Even so, I’m typically guiding more than I am learning.

Coaching developers feels more like a two-way learning process. While asking leading questions about quality, scope, and risk, I’m also learning about the operating environment, debugging methods, code structure, in-house test tools, and more. Importantly, I’m seeing how many bugs can be traced back to code patterns and structure, and can therefore be prevented at that level during development. It’s exciting to learn new ways of preventing issues rather than detecting them later.

Which is better?

When I’m working with testers I automatically assume the role of mentor, trainer, leader. For years this has been my comfort zone – testers are my tribe, my people, my community. Personally and professionally, I thrive when I’m just outside my comfort zone. In the past that has led me to pursue consulting, contracting, senior management roles, public speaking, organising meetups, hosting training courses… Now taking on this new role has allowed me to use my experience in the software industry with quality and coaching, while pushing me to learn about development more deeply.

I’m hopeful that by coaching developers I can have a greater impact on preventing/reducing product quality issues and therefore help to produce quality software faster. It’s still early days.

Please reach out if you’re also in a Quality Coach role and would like to share notes.

Part 1 in a short series of posts about Quality Coaching.


Posted on June 13, 2021 in Coaching, Growth, Learning, Quality



What Does Testing Look Like Without Test Cases?

This article was originally published in Testing Trapeze.

At testing meetups, workshops and conferences, context-driven testers are encouraging others to move away from relying solely on test cases. We reason that the prescribed use of test cases and test metrics is an inadequate approach to software testing. We believe that responsible testers must constantly adapt their approach to meet the current needs of the project.

After these events I hear feedback from attendees who are inspired, yet don’t feel that they could move away from using test cases on their project. Some of them anticipate objections from management, and struggle to come up with responses to explain this different testing approach with confidence. The objective of this article is to offer insight into performing software testing and progress reporting effectively, without relying on test cases, and to help you sell and defend this approach to your stakeholders.

Why not use test cases?
As a contract test specialist I’ve had the opportunity to work on a wide variety of projects. Sometimes I join at the start of a project, but more often I’m hired as the hands-on test manager of an in-flight project, with existing test processes, test cases, and a tight deadline.

All of the test cases I’ve ever inherited have been out-of-date – without exception. They were initially written based on requirements documents. As the project details changed, the time/effort needed to keep test cases updated was better spent on testing new features and bug fixes as they were developed. You might have noticed that your own test cases are out-of-date and time-consuming to maintain. Perhaps you’ve wondered if the time you spend rewriting test case steps would be better spent on other testing activities.

At their best, test cases state the obvious to testers who are already familiar with the product. At worst, test cases contain incorrect or obsolete details which lead new testers astray, and require updates that slow down experienced testers like the proverbial ball and chain. Let’s look at three common test activities without the use of test cases: test preparation, execution and reporting.

Test preparation
As with traditional projects, I map out a test strategy and/or test plan. What’s different is my approach to them. Instead of filling in templates with standard details common to all projects, I create a custom document or mind map to suit the project, using the Heuristic Test Strategy Model (HTSM) as a guide. I constantly reflect on my actions, and ask myself whether the document/section/paragraph is adding value to the project, and whether it is useful and necessary.

So where’s the harm in writing test cases during the test planning phase? The harm isn’t necessarily at the moment of creation; the cost of creating a test artefact also includes the cost of maintenance. Without maintenance, test cases quickly become outdated. Consider whether the information is already available somewhere else, for example, in a requirements document or user stories. It’s more efficient to keep information updated in a single location, rather than duplicating the information across artefacts and across teams.

Instead of spending substantial amounts of time creating difficult-to-maintain test cases, spend this time on other preparatory activities. Read all available documentation, get familiar with existing company products, explore earlier versions of the product and competitor offerings. Improve your product domain knowledge, learn about business processes – present and planned. Talk to developers about the technology stack. Talk to BAs about their design decisions. As you learn, jot down test ideas. This could be in the form of a mind map, checklists, test charters… anything lightweight which can be easily revised.

Test Execution
Working on more-or-less agile projects, I’m focused on testing new features as they’re developed, and keeping up with the pace of development. I’ve found that there’s never a shortage of things to be tested.

My main priorities while testing are to verify that explicit and implicit requirements are met, find bugs and report them. To do this effectively I use a form of structured exploratory testing called thread-based test management (TBTM). This is similar to session-based test management, but testing activities follow a set of ideas rather than being time-boxed. The majority of test ideas are created while testing. I have a Microsoft OneNote file per testing thread, with a new tab each time I retest a feature. Most of my testing threads are feature-based, corresponding to user stories.

When retesting bug fixes, I use a micro-TBTM approach. I treat each bug as a separate thread and enter my test ideas and results directly into the bug tracking tool as a comment. Anyone tracking that bug will be notified of my change, and can provide feedback or ask questions.

For example:

Passed testing in build 33, test environment 1.

  • Postcode field is now populated automatically when suburb is selected – Pass
  • Postcode cannot be manually entered or edited – Pass
  • Still unable to proceed without entering a valid suburb – Pass
  • Typo in validation message, raised separately Bug 2323
  • Suburbs which are duplicated in other states “Silverdale” – Pass
  • Suburbs containing a space in their name “Cockle Bay” – Pass
  • Suburbs which have more than one postcode “Hamilton, NZ” – Pass
  • AU/NZ suburb names containing non-alpha characters – Couldn’t easily locate any, not tested
  • Tested in latest versions of Chrome, Firefox, IE and Safari
  • Tested on desktop and mobile site
  • Updated automated check for customer sign-up

TBTM is traceable, auditable and efficient. It’s also great to have heuristic checklists to compare your test coverage against, particularly if you start running out of ideas for things to test. I find it helpful to run through my checklists prior to release, to gain confidence in my test coverage.
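For illustration, a testing thread like the OneNote files described above can be modelled with a tiny data structure. This is a Python sketch only; the class and field names are my own invention, not part of TBTM itself:

```python
from dataclasses import dataclass, field

@dataclass
class TestIdea:
    description: str
    result: str = "not tested"   # e.g. "Pass", "Fail", "not tested"

@dataclass
class Thread:
    feature: str                 # most threads are feature-based
    # one session per retest, like a new tab in the OneNote file
    sessions: list = field(default_factory=list)

    def retest(self, build, ideas):
        self.sessions.append({"build": build, "ideas": ideas})

# A feature thread retested in build 33, echoing the example above
postcode = Thread("Postcode lookup")
postcode.retest(33, [
    TestIdea("Postcode auto-populates when suburb selected", "Pass"),
    TestIdea("AU/NZ suburb names with non-alpha characters"),  # defaults to "not tested"
])

# Untested ideas carry forward as candidates for the next session
untested = [idea.description
            for session in postcode.sessions
            for idea in session["ideas"]
            if idea.result == "not tested"]
```

The point of the structure is the audit trail: every session records which build was tested and which ideas were covered, so coverage gaps are easy to query later.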

Reporting on test progress
A lot of my test reporting is verbal, for example, during daily standup meetings and in response to direct questions. I provide weekly test progress reports to stakeholders to reiterate these verbal reports, starting from the initial test preparation stages. The information in these weekly reports feeds into the test summary report at the end of the project.

I’ve learned a powerful secret in my quest to stop reporting on test-case metrics: Senior management appreciate bullet points almost as much as graphs! So I keep my test reports short, useful and relevant.

Below is an example weekly test status report, which lists up to five points each for the most important outstanding product issues, risks, tasks completed this week, obstacles/blockers to testing, and objectives for next week. Consider which headings you would add or remove for your project:

Top issues:
– Registration failing in Firefox – Bug 1234
– User remains logged in after performing logout – Bug 1235
– Unable to renew accounts before the account expiry date – Bug 1236

Risks:
– Performance test tool license expires next week, payment is still pending approval

Achieved this week:
– Focused on testing new features being developed: registration, login/logout and account page
– Briefly tested renewals and user profiles: no critical issues raised
– Performed thorough cross-browser and cross-device testing of front-end UI

Obstacles to testing:
– 2 testers were sick this week
– Test environment data refresh delayed until next week

Objectives for next week:
– Continue testing newly developed features and fixes
– Review performance test plan

Any of these points can be a conversation starter, providing you with the opportunity to tell the testing story in more detail. Given the opportunity, you’ll be able to discuss coverage and depth of testing, and perceived product quality. Don’t feel constrained by the questions you’re being asked. For example:

Project manager: What percentage of test cases has passed?
Test lead: That’s not useful information. Here’s what you need to know…

Adding some metrics to your reports is a valid option. For example the ‘number of resolved bugs waiting to be retested’ or the ‘number of bugs raised this week by severity’. However, be aware when including metrics that you run the risk of readers misinterpreting the figures, or placing more importance on the metrics than the rest of the report.
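Both of those counts can be pulled from a bug-tracker export with a few lines of code. A minimal Python sketch follows; the bug list and its field names are made up for illustration:

```python
from collections import Counter

# Hypothetical snapshot exported from a bug tracker
bugs = [
    {"id": 1234, "severity": "High",   "status": "Resolved"},
    {"id": 1235, "severity": "High",   "status": "Open"},
    {"id": 1236, "severity": "Medium", "status": "Resolved"},
    {"id": 1237, "severity": "Low",    "status": "Open"},
]

# Number of resolved bugs waiting to be retested
awaiting_retest = sum(1 for b in bugs if b["status"] == "Resolved")

# Number of bugs raised this week, broken down by severity
by_severity = Counter(b["severity"] for b in bugs)
```

Keeping the metrics this simple makes them easy to explain, which helps reduce the risk of misinterpretation mentioned above.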

I like to present test coverage and quality perceptions to stakeholders in a visual model, for example as a heat map. Heat maps can be created with mind mapping software but recently I’ve switched to using tables, primarily because stakeholders are more familiar with using spreadsheets and don’t require additional software to be installed on their computers. Here’s an example:

[Image: heat map table of product features vs. platforms, cells coloured by test status]

In this example, green cells represent a working feature, red indicates a feature that isn’t working, orange marks a feature that works but has issues, grey is untested, and white is unsupported. At a glance, this heat map shows:

  • which areas of the product have and haven’t been tested;
  • a list of supported features per platform;
  • test team impression of quality per feature; and
  • the corresponding issues or bugs.
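A heat map table like this is straightforward to generate from raw status data. Here is a minimal Python sketch; the features, platforms and colour mapping are illustrative only, not from any real project:

```python
# Cell colours matching the legend above; None means the feature
# is unsupported on that platform (white cell)
STATUS_COLOUR = {
    "working": "green",
    "not working": "red",
    "works with issues": "orange",
    "untested": "grey",
    None: "white",
}

# Hypothetical test-team impressions per feature and platform
coverage = {
    "Registration": {"Desktop": "not working", "Mobile": "untested"},
    "Login/Logout": {"Desktop": "working", "Mobile": "works with issues"},
    "Renewals":     {"Desktop": "working", "Mobile": None},
}

def heat_map_cells(coverage):
    """Yield (feature, platform, colour) tuples for a spreadsheet or HTML table."""
    for feature, platforms in coverage.items():
        for platform, status in platforms.items():
            yield feature, platform, STATUS_COLOUR[status]

cells = list(heat_map_cells(coverage))
```

Because the colours are derived from one shared status dictionary, the table stays consistent with the legend, and regenerating it after each round of testing is cheap.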

Introducing context-driven testing
At this point you may have noticed something interesting: all of the above methods can be used alongside test cases. Sometimes this is how I introduce context-driven testing to a team: gradually. Each time, my team has found that as the project progresses, there is less and less value in referring back to their old test cases.

A few weeks after making the switch to context-driven testing on one such project, some managers became nervous that we might be missing something by not executing the test cases. The other testers and I started to wonder if they might be right, so we tried an experiment. Half of the team spent the next three days executing test cases, and the other half of the team continued with structured exploratory testing. We met at the end of the three days to review our findings.

Even I was surprised to find that the testers executing test cases had not found a single new bug. They felt frustrated and constrained by the outdated test cases, and had put pressure on themselves to get through the test cases as fast as possible. A large chunk of their time was spent looking up existing bugs they had raised, in order to add the issue number reference to their failed test case results. We also realised that they had hardly spoken to a developer or business representative.

Less surprisingly, the half of the team performing structured exploratory testing found the exact opposite. They had continued to work closely with the project team, raising bugs and progressing with feature testing coverage. After that we agreed to leave the test cases in favour of using the business requirements document as a checklist, to confirm whether we’d missed testing any important product features.

There is no one-size-fits-all “best-practice” approach to software testing. I have implemented a new context-driven testing approach for each project I’ve worked on in the last 18 months, across different teams, companies and industries. I’ve earned the respect of project managers, testers, developers and business stakeholders. The turning point for me came after Let’s Test Oz 2014, when I took Fiona Charles’ words to heart – “I’m not going to do bad work”.

The first time I refused to use existing test cases I was nervous. I knew the principles behind context-driven testing and I was keen to implement change. I also felt that I was going out on a limb and putting my professional reputation on the line. Yet it was surprisingly easy to demonstrate the value of context-driven testing approaches to stakeholders, and to gain their support.

Convincing teams to adopt context-driven testing approaches became easier over time. As I built up a portfolio of success stories my confidence improved. I don’t expect junior testers to have this capacity for change within existing project teams. The responsibility lies with test managers and test leads to demonstrate to project stakeholders what software testing really involves.

Context-driven testing is not easy and it’s not boring. It’s challenging, rewarding and – best of all – it’s a more effective way to test. Testers following these principles are more motivated, engaged, valued and respected.

Remember that testing is about providing valuable information, not just the percentage of passed test cases.

With thanks for inspiration:


Posted on June 29, 2017 in Learning, Software Testing


Rapid Software Testing – Reading Recommendations

Having just completed Rapid Software Testing twice in two weeks with James Bach, I’m feeling motivated and inspired to continue learning.

Here’s a list of books recommended by James during the course. These will enhance your skills and change the way you look at testing.


The first book may be the most important, and the most difficult to read. I’m still getting through my copy. The content is excellent, and there’s a lot to take in.
The next 4 books are real page-turners, explaining important and complex information in a way that’s enjoyable to read.
I haven’t yet read the last book on this list.

An Introduction to General Systems Thinking by Gerald Weinberg
Thinking, Fast and Slow by Daniel Kahneman
Tacit and Explicit Knowledge by Harry Collins
Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach, Bret Pettichord
The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully by Gerald Weinberg
Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving by Billy Koen

If you’ve already read these books, I’m interested to hear your thoughts. For example, what was the biggest takeaway you got from each book, and how has that helped you with software testing?


Posted on August 14, 2015 in Learning, Software Testing


