
Category Archives: Software Testing

What Does Testing Look Like Without Test Cases?

This article was originally published in Testing Trapeze.

At testing meetups, workshops and conferences, context-driven testers are encouraging others to move away from relying solely on test cases. We reason that the prescribed use of test cases and test metrics is an inadequate approach to software testing. We believe that responsible testers must constantly adapt their approach to meet the current needs of the project.

After these events I hear feedback from attendees who are inspired, yet don’t feel that they could move away from using test cases on their project. Some of them anticipate objections from management, and struggle to come up with responses to explain this different testing approach with confidence. The objective of this article is to offer insight into performing software testing and progress reporting effectively, without relying on test cases, and to help you sell and defend this approach to your stakeholders.

Why not use test cases?
As a contract test specialist I’ve had the opportunity to work on a wide variety of projects. Sometimes I join at the start of a project, but more often I’m hired as the hands-on test manager of an in-flight project, with existing test processes, test cases, and a tight deadline.

All of the test cases I’ve ever inherited have been out of date – without exception. They were initially written based on requirements documents. As the project details changed, the time and effort needed to keep test cases updated was better spent on testing new features and bug fixes as they were developed. You might have noticed that your own test cases are out of date and time-consuming to maintain. Perhaps you’ve wondered if the time you spend re-writing test case steps would be better spent on other testing activities.

At their best, test cases state the obvious to testers who are already familiar with the product. At their worst, test cases contain incorrect or obsolete details which lead new testers astray, and require updates which slow experienced testers down like the proverbial ball and chain. Let’s look at three common test activities without the use of test cases: test preparation, execution and reporting.

Test preparation
As with traditional projects, I map out a test strategy and/or test plan. What’s different is my approach to them. Instead of filling in templates with standard details common to all projects, I create a custom document or mind map to suit the project, using the Heuristic Test Strategy Model (HTSM) as a guide. I constantly reflect on my actions, and ask myself whether each document, section or paragraph is adding value to the project, and whether it is useful and necessary.

So where’s the harm in writing test cases during the test planning phase? The harm isn’t necessarily at the moment of creation; the cost of creating a test artefact also includes the cost of maintenance. Without maintenance, test cases quickly become outdated. Consider whether the information is already available somewhere else, for example, in a requirements document or user stories. It’s more efficient to keep information updated in a single location, rather than duplicating the information across artefacts and across teams.

Instead of spending substantial amounts of time creating difficult-to-maintain test cases, spend this time on other preparatory activities. Read all available documentation, get familiar with existing company products, explore earlier versions of the product and competitor offerings. Improve your product domain knowledge, learn about business processes – present and planned. Talk to developers about the technology stack. Talk to BAs about their design decisions. As you learn, jot down test ideas. This could be in the form of a mind map, checklists, test charters… anything lightweight which can be easily revised.
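
For example, an early set of test ideas for a registration feature might be nothing more than a short checklist like this (the feature and details are purely illustrative):

Registration
– mandatory fields and validation messages
– duplicate email addresses
– password rules match the requirements document
– behaviour on browser back/refresh part-way through registration

A note like this takes seconds to revise when the feature changes – exactly the kind of maintenance that test cases resist.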

Test execution
Working on more-or-less agile projects, I’m focused on testing new features as they’re developed, and keeping up with the pace of development. I’ve found that there’s never a shortage of things to be tested.

My main priorities while testing are to verify that explicit and implicit requirements are met, find bugs and report them. To do this effectively I use a form of structured exploratory testing called thread-based test management (TBTM). This is similar to session-based test management, but testing activities follow a set of ideas rather than being time-boxed. The majority of test ideas are created while testing. I have a Microsoft OneNote file per testing thread, with a new tab each time I retest a feature. Most of my testing threads are feature-based, corresponding to user stories.
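
As a rough sketch of how one of those thread files might be organised (the thread name, user story and build numbers are invented for illustration):

Thread: Account registration (user story US-42)
– Tab: Build 31 – first pass over the new feature, initial test ideas and results
– Tab: Build 33 – retest after bug fixes, with new test ideas added as they occurred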

When retesting bug fixes, I use a micro-TBTM approach. I treat each bug as a separate thread and enter my test ideas and results directly into the bug tracking tool as a comment. Anyone tracking that bug will be notified of my change, and can provide feedback or ask questions.

For example:

Passed testing in build 33, test environment 1.

  • Postcode field is now populated automatically when suburb is selected – Pass
  • Postcode cannot be manually entered or edited – Pass
  • Still unable to proceed without entering a valid suburb – Pass
  • Typo in validation message, raised separately as Bug 2323
  • Suburbs which are duplicated in other states (“Silverdale”) – Pass
  • Suburbs containing a space in their name (“Cockle Bay”) – Pass
  • Suburbs which have more than one postcode (“Hamilton, NZ”) – Pass
  • AU/NZ suburb names containing non-alpha characters – Couldn’t easily locate any, not tested
  • Tested in latest versions of Chrome, Firefox, IE and Safari
  • Tested on desktop and mobile site
  • Updated automated check for customer sign-up

TBTM is traceable, auditable and efficient. It’s also helpful to have heuristic checklists to compare your test coverage against, particularly if you start running out of ideas for things to test. I find it valuable to run through my checklists prior to release, to gain confidence in my test coverage.

Reporting on test progress
A lot of my test reporting is verbal, for example, during daily standup meetings and in response to direct questions. I provide weekly test progress reports to stakeholders to reiterate these verbal reports, starting from the initial test preparation stages. The information in these weekly reports feeds into the test summary report at the end of the project.

I’ve learned a powerful secret in my quest to stop reporting on test-case metrics: Senior management appreciate bullet points almost as much as graphs! So I keep my test reports short, useful and relevant.

Below is an example weekly test status report, which lists up to five points each for the most important outstanding product issues, risks, tasks completed this week, obstacles/blockers to testing, and objectives for next week. Consider which headings you would add or remove for your project:

Top issues:
– Registration failing in Firefox – Bug 1234
– User remains logged in after performing logout – Bug 1235
– Unable to renew accounts before the account expiry date – Bug 1236

Risks:
– Performance test tool license expires next week, payment is still pending approval

Achieved this week:
– Focused on testing new features being developed: registration, login/logout and account page
– Briefly tested renewals and user profiles: no critical issues raised
– Performed thorough cross-browser and cross-device testing of front-end UI

Obstacles/blockers:
– 2 testers were sick this week
– Test environment data refresh delayed until next week

Objectives for next week:
– Continue testing newly developed features and fixes
– Review performance test plan

Any of these points can be a conversation starter, providing you with the opportunity to tell the testing story in more detail. Given the opportunity, you’ll be able to discuss coverage and depth of testing, and perceived product quality. Don’t feel constrained by the questions you’re being asked. For example:

Project manager: What percentage of test cases has passed?
Test lead: That’s not useful information. Here’s what you need to know…

Adding some metrics to your reports is a valid option, for example the ‘number of resolved bugs waiting to be retested’ or the ‘number of bugs raised this week by severity’. Be aware, however, that when you include metrics you run the risk of readers misinterpreting the figures, or placing more importance on the metrics than on the rest of the report.

I like to present test coverage and quality perceptions to stakeholders in a visual model, for example as a heat map. Heat maps can be created with mind mapping software, but recently I’ve switched to using tables, primarily because stakeholders are more familiar with spreadsheets and don’t need additional software installed on their computers. Here’s an example:

[Image: heat map table of product features and platforms]
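
The original image isn’t reproduced here, but a rough text-only sketch of such a table might look like the following, with status words standing in for the colour coding described below (the features, platforms and statuses are invented for illustration, reusing the bug numbers from the status report above):

Feature        Web                     iOS        Android
Registration   Issues (Bug 1234)       Working    Working
Login/logout   Issues (Bug 1235)       Working    Untested
Renewals       Not working (Bug 1236)  Untested   Unsupported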

In this example, the green cells represent a working feature, red indicates a feature that isn’t working, orange is a feature that works but has issues, grey is untested, and white is unsupported. So, at a glance this heat map shows:

  • which areas of the product have and haven’t been tested;
  • a list of supported features per platform;
  • test team impression of quality per feature; and
  • the corresponding issues or bugs.

Introducing context-driven testing
At this point you may have noticed something interesting: all of the above methods can be used alongside test cases. Sometimes this is how I introduce context-driven testing to a team: gradually. Each time, my team has found that as the project progresses, there is less and less value in referring back to their old test cases.

A few weeks after making the switch to context-driven testing on one such project, some managers became nervous that we might be missing something by not executing the test cases. The other testers and I started to wonder if they might be right, so we tried an experiment. Half of the team spent the next three days executing test cases, and the other half of the team continued with structured exploratory testing. We met at the end of the three days to review our findings.

Even I was surprised to find that the testers executing test cases had not found a single new bug. They felt frustrated and constrained by the outdated test cases, and had put pressure on themselves to get through the test cases as fast as possible. A large chunk of their time was spent looking up existing bugs they had raised, in order to add the issue number reference to their failed test case results. We also realised that they had hardly spoken to a developer or business representative.

Less surprisingly, the half of the team performing structured exploratory testing found the exact opposite. They had continued to work closely with the project team, raising bugs and progressing with feature testing coverage. After that we agreed to abandon the test cases in favour of using the business requirements document as a checklist, to confirm whether we’d missed testing any important product features.

Conclusion
There is no one-size-fits-all “best-practice” approach to software testing. I have implemented a new context-driven testing approach for each project I’ve worked on in the last 18 months, across different teams, companies and industries. I’ve earned the respect of project managers, testers, developers and business stakeholders. The turning point for me came after Let’s Test Oz 2014, when I took Fiona Charles’ words to heart – “I’m not going to do bad work”.

The first time I refused to use existing test cases I was nervous. I knew the principles behind context-driven testing and I was keen to implement change. I also felt that I was going out on a limb and putting my professional reputation on the line. Yet it was surprisingly easy to demonstrate the value of context-driven testing approaches to stakeholders, and to gain their support.

Convincing teams to adopt context-driven testing approaches became easier over time. As I built up a portfolio of success stories my confidence improved. I don’t expect junior testers to have this capacity for change within existing project teams. The responsibility lies with test managers and test leads to demonstrate to project stakeholders what software testing really involves.

Context-driven testing is not easy and it’s not boring. It’s challenging, rewarding and – best of all – it’s a more effective way to test. Testers following these principles are more motivated, engaged, valued and respected.

Remember that testing is about providing valuable information, not just the percentage of passed test cases.


Posted on June 29, 2017 in Learning, Software Testing

 

One Tester’s Account of an iOS Update


As a tester I’m well aware that things can go wrong during software updates. I always wait at least a week before upgrading iOS on my iPhone, leaving time for issues to be reported and fixed. When I run the upgrade, I’m expecting something to go wrong. This means that I always back up my phone first – luckily!

 

During the latest upgrade, I saw this on my iPhone:

 

[Image: iPhone recovery mode screen – a cable pointing towards the iTunes logo]

I’d been expecting something to go wrong, and my iPhone cable is looking a bit worse for wear these days, so I assumed that my phone had been disconnected. Doesn’t that screen look like it’s asking me to plug in my phone?

 

Naturally I unplugged and plugged in the cable again to restore the connection.

 

In hindsight I think that image means “iTunes is updating your phone, do not under any circumstances unplug the cable”. From a usability perspective, this image is ambiguous at best. Poor usability combined with my pessimism caused the update to fail.

 

I’ve had only one iOS update fail in the past, around 2 years ago, yet since then I pretty much expect them all to fail. A catastrophic failure can leave a lasting impression of poor software quality.

 

A popup dialog said that I needed to run the Restore next, which would clear all of my phone settings and restore from backup. That sounded straightforward to me, because I’ve restored my phone in the past with no issues (due to hardware upgrades and replacements). Based only on my personal experience, my impression was that the Restore feature is more reliable than the Upgrade feature. In fact, the upgrade has failed for me just once out of many uses, while the restore has always been successful – but I’ve only used it four times.

Then the restore from backup failed, with Unknown error 21.

 

[Image: tweet about the restore failure]

The dialog with the unhelpful wording led to a website with clear instructions. Some steps were less helpful than others, e.g. “Upgrade Windows and reboot your PC”.

 

After following the steps I tried again, and this time the Restore process looked like it was working.

 

Then iTunes asked for my backup password. I’d just created a backup this morning and hadn’t been asked to give it a password…? I tried my Apple ID password, with no luck.

 

I found an Apple forum post where someone was complaining about having no way to recover the password. The Apple fan club had gone on the offensive: “How could you forget your password? This is your own fault!” The defence was, “How can I be expected to remember my password from years ago when I bought my first iPhone?”

 

Aha! A clue… I tried one of my old passwords from days gone by and it worked! The person complaining was more help to me than the Apple-defenders.

 

My iPhone is now restoring from backup and I’m breathing a sigh of relief. I feel like it could be smooth sailing from here, and there’s only 22 mins remaining.

Now 27 minutes.

Now 34 minutes.

 

Hmm…

 

Posted on July 30, 2016 in Software Testing

 

WeTestAKL Meetup 99-sec Talks

Our July WeTest Auckland Meetup came with this warning:

If you attend this meetup, you will be speaking!

Inspired by TestBash, Shirley Tricker organised a meetup where everyone was required to speak… but only for 99 seconds!

We started with two 25-minute talks from first-time speakers, on mobile test automation and communication skills for testers. Then we launched into the 99-sec talks.

Shirley chose four people at a time to come to the front and speak for 99 seconds each, to a group of 16 testers. Some people had clearly practised their timings, delivering their closing sentences just seconds before the buzzer. Many more were surprised when the buzzer sounded mid-sentence!

I’m so impressed that everyone who came along spoke, including: people who’d made it off the waiting list just one hour before the meetup; some first-time attendees; plenty of first-time speakers; and those who hadn’t read the Meetup page properly and didn’t realise that they needed to talk!

 

Lauren Young – What is a Statement of Work?

Kasturi Rajaram – Why I love Non-Functional Testing

Rasha Taher – HiPPO: Highly Paid Person’s Opinion

Sandy Lochhead – Child’s Play, the Value of Playing Games

Nadiia Syvakivska – Luck

James Espie – Maintaining Perspective

Reshma Mashere – Listening Skills and Meditation

Monika Vishwakarma – Thank You WeTest

Bede Ngaruko – Fear of Failure

Jennifer Haywood – Mobile Testing Mnemonics

Inderdip Vraich – SpecFlow

Nav Brar – My Journey to NZ

Shirley Tricker – Career Tips

Laurence Burrows – Raising Defects

Jennifer Hurrell – Documenting Exploratory Testing

Kim Engel – #30DaysOfTesting Challenge

Thank you Shirley for creating such a friendly and fun environment where we could share and learn. And thank you to our sponsors for the evening, Assurity.

* More details on the 99-sec format:
http://scottberkun.com/2012/99-second-presentations/
http://www.ministryoftesting.com/2015/06/99-second-talks-go-virtual/
 

Posted on July 30, 2016 in Software Testing

 

As a software tester, do I need to learn about automation?

Testers regularly ask, “Do I need to learn automation skills?”.

Let’s step back for a minute and put this another way, “Do I need to learn how to drive a car?”.

Well, you could walk, or take the bus, or pay a taxi/Uber driver to drive you around. These are all valid choices. But there’s a big advantage in being able to drive, for times when driving is the best option.

So my answer is no, you don’t need to learn automation skills, but having those skills will let you make informed decisions about the most efficient way to approach each testing task, with a wider range of options available to you.

Consider for a moment: what has stopped you from learning hands-on automation skills before now?


Ignore the Fear

When testers ask me whether they’ll need to learn automation skills, there’s an implied reluctance or fear behind the question. Sometimes it’s a fear of being left behind in the job market, or of being replaced by testers with stronger technical skills. In a few cases the testers asking seemed reluctant to invest time in learning new skills unnecessarily. I have a lot more time for the first group of people than for the second.

If you’re one of the testers asking this question: are you nervous, intimidated or overwhelmed when it comes to learning automation skills to help with software testing? For now, let’s pretend that you’re not. Imagine instead that you’re completely capable of writing an automation script (because you are). How can you get started?

Avoid Analysis Paralysis

There is so much information online that we’re presented with an overwhelming number of options and learning resources. I got stuck while deciding which programming language to learn (COBOL has gone out of fashion since my development days).

I had finally decided on Python – I can’t remember why now – and had completed the first few tutorials, when others convinced me I should be learning Java. Or C#, or Ruby… So I stopped doing the Python tutorials, and unfortunately it was a while before I decided to try again.

I’ve since learned that it simply doesn’t matter. My learning experience has driven my approach for this post. The key thing to remember is that once you learn the basics of an object-oriented programming language, and an automation tool, it gets much easier to learn the next one. They all have a lot in common.

To help you stop procrastinating, and to make sense of all the options out there, the rest of this post is very straightforward. My goal is to get testers to learn some automation, to complement their existing skill sets.


Take the First Step

Learn Java, using this website to get you started: http://www.tutorialspoint.com/java/index.htm

(The time you may be tempted to spend researching the best website or tutorial to use could be spent learning Java instead.)

You know your own preferred learning style better than I do; I just have a few words of general advice.

  1. Start from the beginning.
  2. Follow the instructions to set up your computer, and try to overcome any perceived obstacles while you’re getting started – i.e. if you get stuck, ask for help.
  3. You won’t need to memorise anything. Yes, the maximum “int” value is 2,147,483,647. And no, you won’t need to remember that number. Once you’ve read the course information, you’ll know where to find it later if you need it again (see the sketch after this list).
  4. It’s okay if some things don’t make sense right away. “Float data type is a single-precision 32-bit IEEE 754 floating point”. I don’t understand most of that sentence, yet I can write automation scripts.
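
To give a concrete idea of what those first lessons produce, here’s a minimal sketch of a first Java program (the class name and values are my own inventions). It prints the maximum int value mentioned above – the language will tell you the things you don’t need to memorise:

  // FirstProgram.java – compile with javac, run with java
  public class FirstProgram {
      public static void main(String[] args) {
          System.out.println("Hello, tester!");
          // No need to memorise this value – ask the language instead:
          System.out.println("Largest int: " + Integer.MAX_VALUE); // 2147483647
          float price = 9.99f; // a single-precision floating point number
          System.out.println("Price: " + price);
      }
  }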


You don’t need to complete all of the lessons unless you want to. You’re not trying to become a developer; you only need to know ‘just enough’ to be getting on with for now. The course material will still be there when you need to deepen your knowledge of Java, down the track.

There are no stupid questions. Anything you want to ask has probably already been asked and answered on http://stackoverflow.com/. If you can’t find an answer there please feel free to post your question. You’ll be doing a huge favour to plenty of other beginners like yourself, who will benefit from reading the responses in the future.

Take the Next Step

So, what’s next?

Pick a tool, any tool. If your test team aren’t currently using any automation tools and you’re stuck for ideas, check out Ghost Inspector https://ghostinspector.com/. There’s a free trial and you can quickly automate some website checks. Ghost Inspector is similar to Selenium, but it’s hosted in the cloud with a decent GUI and a record-and-playback feature, making it easy to get started with.


Start by using a non-production version of the website you’re currently testing, or use a sample website created specifically for learning to use automation tools. For example, http://phptravels.com/demo/.

The first test scripts you create will fall into all the traps inherent in using record-and-playback tools, and that’s okay! Because you’ll be automating, and you’ll be learning, and that’s an excellent start.
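
When you’re ready to move beyond record and playback, a first scripted check doesn’t need to be much bigger than this. Below is a minimal sketch in Java using Selenium WebDriver – it assumes the Selenium Java bindings and ChromeDriver are set up, and the class name and title check are my own inventions:

  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.chrome.ChromeDriver;

  public class FirstWebCheck {
      public static void main(String[] args) {
          // Open a browser and load the practice site
          WebDriver driver = new ChromeDriver();
          try {
              driver.get("http://phptravels.com/demo/");
              // A simple check: print the page title and flag anything odd
              String title = driver.getTitle();
              System.out.println("Page title: " + title);
              if (title.isEmpty()) {
                  System.out.println("Possible problem: page has no title");
              }
          } finally {
              driver.quit(); // close the browser even if the check fails
          }
      }
  }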

From here you’ll gain confidence to start or join a conversation about automation, you’ll have specific questions to ask, and you’ll be able to research online independently and continue to learn new automation skills.


It’s Okay to Fail

They say in Silicon Valley, “Fail fast, fail often”. The same applies here.

Learn the hard way by diving in, getting started, making mistakes, and improving as you go. The worst thing you could do is read every “Lesson learned in test automation” article on the internet before learning to use a single automation tool.

Start Right Now

Get started with an online Java course right now – http://www.tutorialspoint.com/java/index.htm.

If you prefer hands-on classroom-style learning, at Engel Consulting we have Introductory Java and Selenium courses available in Auckland, with more courses coming soon. There may also be local training providers in your area, or experienced testers willing to provide training on request.

Also worth a mention is Mike Talks’ blog post series on automation: http://testsheepnz.blogspot.co.nz/2016/06/automation-1-guide.html

 

Posted on July 24, 2016 in Software Testing

 


What Are Test Oracles?


How do you recognise that something you’ve seen may be a bug?

What do you do when your test results don’t match the expected results?

For our WeTest Meetup discussion we started with a working definition of an oracle as “a source of authoritative information”.

We rarely rely on just one source of information when determining how a product should work. For example, when working from a list of written requirements we may also check how the previous version of the product works, to further clarify our understanding.

Two authoritative sources can contradict each other. For example, written requirements may not match the design documents. In that case, how do we decide which oracle to use?

Stakeholder decisions during product development can be a compromise between desired functionality and a practical solution. How often is documentation updated to reflect these ever-changing decisions? In this situation, is ‘product team consensus’ considered to be an oracle?

An oracle can be incomplete. In fact, we struggled to think of a real-world example where an oracle would be complete and correct…

Rapid Software Testing (RST) takes a broader definition of an oracle:

“An oracle is a means by which we recognise a problem when it happens during testing” – James Bach and Michael Bolton, Rapid Software Testing.

Rasha Taher brought along this RST diagram on oracles to share with the group:

[Diagram: the RST oracle quadrants]

Our authoritative oracles all fall under the Reference category in this model. They are external, explicit oracles. This model opened up a whole new perspective to the discussion. We started to brainstorm other oracles we use every day, without necessarily realising that we’re using them.

After talking through this model as a group, we felt more conscious of using our own experience and feelings as oracles while testing. Did the behaviour of a certain feature make you feel confused? You may have found a usability issue, for example. Once you’ve worked out the cause of your confusion, consider whether users may encounter the same problem, particularly new users.  Will they be provided with more or less training than you received as a tester? Is there a way for users to overcome this hurdle, without needing to personally ask the developer and product owner how the feature should work?

Conference oracles highlight the importance of communication in software testing. With open and frequent communication testers can gain a clearer picture of stakeholders’ expectations of the product. This can help guide testing, to determine how well the product meets those expectations.

While discussing Inference oracles, Ram Malapati led us to the FEW HICCUPPS heuristic. These are a whole topic by themselves, and could be the main topic for our next discussion group!

Through learning the term ‘test oracles’ and reviewing this model, we feel more empowered and in control of our test approach. While we may not use the term oracles at the office, just being aware of the oracles we use daily can improve both our approach and our confidence.

Finally, learning the term ‘test oracles’ opens up a new avenue of research on software testing methods, to learn more about oracles and how we can use them.

Further reading:
As Expected – Michael Bolton
Oracles from the Inside Out – Michael Bolton
What Testers Find – James Bach

 

Posted on July 24, 2016 in Software Testing

 

Listen to Your First Testing Podcast


This month I’m joining many members of the online software testing community in the 30 Day Testing Challenge.

The challenge for Day 3 is “Listen to a testing podcast”.

Testers know that rules are made to be broken… I was planning to watch a testing talk on YouTube instead. When my colleague Ram insisted that I should actually listen to a podcast, I had to confess that I’ve never listened to a podcast before.

Here’s what I’ve learned about how to get started. Unsurprisingly really, it’s easy!

I’ve included 3 testing podcast channels below. If you know of other great channels please add them in the comments section.

  1. Search for/open the Podcasts app (see the Apps note below for recommendations)
  2. Search for a testing podcast, e.g. Ministry of Testing, Testing in the Pub, Test Talks…
  3. Subscribe to the channel
  4. View the channel feed
  5. Download an episode that looks interesting
  6. Listen at any time


Podcast Terminology

Podcast – a term used to refer to either a channel or an episode (confusing, hey?)
Channel – a collection of podcast episodes, which users can subscribe to
Episode – an individual mp3 file, which can be downloaded or streamed
Feed – an updated list of episodes for a podcast channel

Apps

There are lots of other podcast apps available for iOS and Android, with extra features.

 

Posted on July 2, 2016 in Software Testing

 

“7 Testing Principles” – Meetup takeaways


Over lunchtime today our WeTest meetup group discussion topic was the ‘7 Testing Principles’.

Our aim was not to dissect and review the principles… we only had 50 minutes, less the time it takes to order and pay for lunch! Instead we used the principles as a focus point to discuss relevant aspects of our current project contexts and our past experiences.

From principle 3, ‘Early testing’ – my takeaway was that testing involvement is more useful before and after architectural design than during it. I think this could be a whole separate topic for discussion or debate, depending on other testers’ experiences and context.

There were also real-world stories of what can go wrong when testing is not done early e.g. finding issues with requirements during UAT, on an “Agile” project!

From principle 6, ‘Testing is context dependent’ – we had fun explaining different aspects of context and how they affect our current projects. My key takeaway is that context can change even within the same team and project. For example, one company grew from 5 to 150 people, causing a major context shift.

The question was raised, “Can automation be context-driven?” Again, that will be a good topic for a future discussion. I’d like to explore this in more depth.

Principle 2 “Exhaustive testing is impossible” – This came up briefly, and basically we all nodded 🙂

At that point we were out of time, and we finished by unexpectedly giving Elena (a Neighbourly developer) some direct user-feedback.

This meetup was interesting, easy to organise through our existing Meetup group, and fun to attend. I encourage you to consider hosting a similar discussion group.

 

Posted on April 29, 2016 in Software Testing

 
 