Tag Archives: Testing

Rapid Software Testing – Reading Recommendations

Having just completed Rapid Software Testing twice in two weeks with James Bach, I’m feeling motivated and inspired to continue learning.

Here’s a list of books recommended by James during the course. These will enhance your skills and change the way you look at testing.


The first book may be the most important, and the most difficult to read. I’m still getting through my copy. The content is excellent, and there’s a lot to take in.
The next four books are real page-turners, explaining important and complex information in a way that’s enjoyable to read.
I haven’t yet read the last book on this list.

An Introduction to General Systems Thinking by Gerald Weinberg
Thinking, Fast and Slow by Daniel Kahneman
Tacit and Explicit Knowledge by Harry Collins
Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach, Bret Pettichord
The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully by Gerald Weinberg
Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving by Billy Koen

If you’ve already read these books, I’m interested to hear your thoughts. For example, what was the biggest takeaway you got from each book, and how has that helped you with software testing?

 
Posted on August 14, 2015 in Learning, Software Testing

What’s Your Context? – Workshop with Fiona Charles

How do you discover the differences in context between clients and projects, and whether those differences matter? While intuition is important, unconscious analysis and choices lead to unconscious assumptions – Fiona Charles

Ask a tester which approach is the best way to test software. The typical response will be “It depends”. But what does it depend on, and why? How will those factors affect testing?

Fiona presented her “What’s Your Context?” workshop to the Auckland WeTest meetup group. We split into 6 groups to brainstorm the elements of context that affect our approach to testing. It’s difficult to report on the value gained from attending a workshop, as the learning comes from being involved in the discussion. Here I’ve recorded my brief notes on the “Aha!” moments shared at the end of the workshop.

Me: I’ll be doing this exercise regularly for projects I work on. I’m now more aware of how much context can change during the course of a project.

Natalia: It was very useful to learn about other testers’ contexts.

Pete: Two favourite sayings are ‘It depends’ and ‘Why?’. For example, ‘Why are we doing this?’.

Morris: We say ‘team’ a lot. Testing is a team sport.

Vikas: Highlights the importance of thinking about context deliberately, instead of repeating past processes.

Shaheem: Maintain a risk focus, asking which things could derail or negatively impact the project.

Chris: The definition of Minimum Viable Product is heavily context-dependent.

Georgia: Focus on the problem being solved. Take history into account. Appreciate the benefit of informal communications (this arose in the context of working with offsite teams).

Vincent: Liked the experience of working with his team, learning from their experiences, and the way they grouped elements of context into personal, product, team and development methodology.

John: So many sources of information beyond just requirements. Stop and consider context first before you get started.

I encourage other testers to take Fiona’s workshop. Her questions, insights and stories brought the exercise to the next level.
In the meantime, consider pairing up with one or more testers and asking yourselves, “Which elements of your context affect your testing? Why, and how? How can you use this information to improve your testing approach for your current project?”

 
Posted on November 10, 2014 in Software Testing

Let’s Test Oz – Closing Keynote from Fiona Charles

The closing keynote speech of Let’s Test Oz 2014 was “The Battle for Our Hearts and Minds” by Fiona Charles.

This was the first time I’ve attended one of Fiona’s talks in person. My impressions of Fiona after this conference are that she is honest, practical, a strategic thinker and that she doesn’t mince words.

“I have seen 50-page test strategies without an ounce of strategic thinking”

Fiona Charles

“I’m not going to do bad work” Fiona Charles

The theme of this keynote was that attempts to standardise testing are stifling creativity and value, and that it’s time for testers to take back our craft. Fiona spoke of the need for testers to have the courage and tenacity to speak up about important issues when others remain quiet. This included being willing to ‘blow the whistle‘ where necessary to expose important issues which could affect people’s lives.

“We need to be able to say things that nobody wants to hear, because that’s our job”

The topic of testing standards came up more than once, as a primary cause of the long-term de-skilling of the testing workforce and of the current overall state of testing processes and documentation. In one example, Fiona reviewed a 25-page IEEE 829-compliant Test Plan and saw no project-specific content until page 12. The time taken to produce these documents costs companies money, and contributes to testing being viewed as ‘too expensive’. The focus of testing should be on adding value to the project and to the company.

“The Master Test Plan is probably the most useless document since printing was invented”

Most of the people behind the creation of ISO 29119 stand to profit if the standard is introduced. Interestingly, Fiona opposes ISO 29119 even though she too expects to profit from it if it’s introduced. She described how she has seen first-hand the damage caused by compliance with the IEEE 829 test documentation standard. She has been called in to multiple organisations to mop up the damage that standard leaves in its wake, and she has every reason to believe that ISO 29119 would create more of the same.

“The quest for certainty collides with the reality of software development”

Fiona introduced the concept of “healthy uncertainty vs unhealthy certainty” while debunking the notion that popular test metrics are useful. She covered some key attributes of great testers, and they’re not the ones you see listed in job ads: Integrity, Independence of Mind, Courage, Engagement…

I really enjoyed this talk. It was motivational, inspirational and a call to action for all testers.

Recommended reading/viewing:
The slides from this keynote are available from the Let’s Test Oz website.
Breaking the Tyranny of Form blog post – Fiona Charles
Delivering Unwelcome Messages EuroSTAR webinar – Fiona Charles
Slides from We are the 99% – Anne-Marie Charrett

All quotes in this post are from Fiona Charles’ keynote.

 
Posted on October 10, 2014 in Software Testing

Equality at IT conferences

At the Let’s Test Oz conference I actually forgot about gender inequality in IT for three days.

I don’t complain about gender inequality in the workplace per se, for the same reasons I try not to complain in general: it’s not efficient, it’s not effective, and no-one listens anyway… I prefer to act. For example:
– Supporting and advocating for my team members who’ve escalated issues of harassment (albeit to no avail).
– Working to ensure that team members can return to work part-time after a career break if required.
– Championing pay rises for team members who are comparatively underpaid (and usually aren’t aware that they’re underpaid).

At most IT events I can’t help but notice that I’m in the minority. At CITCON 2014 approximately 10% of attendees were female. At a recent Splunk seminar in Auckland less than 5% of us were female, and that was confronting. On the other hand, I’ve gotten used to management meetings with predominantly male co-workers, because that’s the norm today.

The Let’s Test Oz 2014 conference was an exception. During the conference wrap-up Anne-Marie Charrett observed an almost equal number of men and women at the conference, both speakers and attendees. I looked around and saw that she was right, and then I allowed this to really sink in… The gender distribution at the conference matched the real world closely enough that I’d gotten through an entire IT conference without noticing the percentage of women attending. That’s progress!

While writing this I realised that I’ve never noticed gender at the Auckland Testers Meetups either, again because attendance is representative of the real world. I think it’s great to notice and celebrate these examples of progress.

I’ve written for the Women Testers magazine and attended the Women in Tech meetup to support both initiatives, to further my learning, and to build my professional network and industry profile. I was silently cheering for the few men at the last Women in Tech meetup because they genuinely came to listen and learn about the issues women face. We could do with more of that in tech.

 
Posted on October 10, 2014 in Equality

ER: Learning exercise on the Implicit Principles of CDT

This is a detailed experience report on a learning exercise I completed recently, following my post about James Bach‘s opening keynote at Let’s Test Oz (LTO) 2014:

…There was one slide in particular which I could’ve questioned James on for another hour, called Implicit principles of the Context-Driven School of Testing. This slide contains ideas which could fill a book, if James had time to write another book. I think I need to read some more books before I can fully fathom the concepts presented! The beauty of this conference is that I have many opportunities to find James in the hotel and ask about this slide in more detail…

On day 3 of LTO I tracked down James and we spent the better part of an hour walking through these principles. Take a moment to review the slide below in detail.

[Slide: Implicit Principles of the Context-Driven School of Testing, presented by James Bach at Let’s Test Oz 2014]

At first glance I found these Implicit principles intimidating. In hindsight, I’ve attributed this to three main reasons:

1. My immediate impression was that the principles are based on a lot of assumed knowledge. I’ve since concluded that these ideas stand alone, and the impression of assumed knowledge was due to the number of unfamiliar terms. In fact the only assumed knowledge is a basic understanding of software testing and an above-average English vocabulary.

2. The principles include words which I couldn’t define, such as primacy, non-linearity, cybernetic and authorship. I’ll define these simply here, in my own words:
Primacy – Most important
Non-linearity – Unpredictability
Cybernetic – React to observations
Authorship – Creation

3. I couldn’t understand the reasons behind the creation of these principles. If James covered this in his talk then I missed it. I was wondering what was wrong with the original list of seven basic principles (shown below). I wanted to understand not just the new principles themselves, but also the need for them, and the thought processes that went into creating them. As an aspiring CDT practitioner, understanding this process of articulating the CDT approach is important to me.

[Image: Original CDT Principles by Cem Kaner and James Bach]


The Learning Exercise

While discussing these principles with Anna Royzman over dinner, I mentioned that the new list seemed unapproachable for the masses. Anna agreed with the need for a shorter, simpler version, which I’ve called the marketing version. Anyone reading the marketing version and looking for more information could read the full version of these principles for more context and detail.

I approached James about this on Skype and offered to create a first draft for review, to get the ball rolling. James disagreed with the need for a marketing version of the principles, but he did agree to support me in the process of creating it. I was quite confident in the need for a new version…

Draft 1 of my marketing version

I have to remind myself now that I was proud of this version when I initially created it, while the ink was still wet. I felt that I’d captured the key points from James’ 10 Implicit principles and presented them in a user-friendly way. Here’s what I came up with:

  1. We constantly adapt test processes to changes in our real-world environment (covers point 1 from updated list)
  2. We learn through investigation and present facts based on evidence (covers points 2 & 4)
  3. Due to systems complexity, we observe and react to uncertainty (point 3)
  4. Systems are developed by people, for people (point 5)
  5. Testers have a duty to add value and behave ethically (point 6)
  6. Testers work with the team and share responsibility for quality (point 7)
  7. Testers must continue to learn a variety of skills and practices, in order to adapt processes to suit each project (covers points 8, 9 and 10)

I sent this list to James for review, and we started to review point 1 together. Based on James’ questions and comments, I had a basis to further review my own work. These are my brief notes from that review.

Self-review 1:
1.
2. ‘Facts based on evidence’ doesn’t seem to tie in with heuristics.
3. No longer sure that this sentence makes my point clear
4. Therefore… what?
5. I want/need to do this; it’s not just a duty. This sentence [as written] currently applies to everyone in the company…
6. Sounds like I’m describing Agile testers. “The team” – which team?
7. This point isn’t terrible.

This version is vacuous. I’ve abstracted too far; it could almost be describing procedure-driven testing. Start again…

Taking a Step Back

During Skype coaching James posed the question, “If you had to tell someone a few things that would get across to them the *gist* of CDT what would that be?” This expanded the learning exercise for me, as I’d have the freedom to create a whole new list, rather than devising a simplified version of the 10 implicit principles.

I decided to first work out which concepts were lacking in the 7 Basic CDT Principles, in order to understand the need for the 10 Implicit CDT Principles. I came up with a working list of the differences (shown in the image below, initially without the information in italics).

[Image: Comparison of CDT Concepts]

While reviewing both lists so closely, I gained a greater appreciation for the initial list of principles. I really started to doubt the need for a new short version, and I also doubted whether my creation could come anywhere close to the original.

When I reviewed some of the differences I’d identified with James, he said something very interesting: his updated list of 10 principles was not intended to replace the original list! Therefore the original list still stands, and my marketing version is redundant. The new list was designed to clarify the implicit concepts behind the creation of the original principles.

The implicit principles were created by the main principles…
… I wrote the ten by asking myself what did I mean by the seven and what must necessarily be true to achieve them.

This was my aha moment. Now I understood that there was no need for a new short list. The original list was not being replaced at all, only expanded upon in more detail.

Conclusion

Although I’d started out on a fool’s errand due to a misunderstanding, this turned out to be a productive learning exercise. I can now speak confidently about the principles of context-driven testing and will continue to champion them. I’ve suggested two changes to James, both of which he agreed with. The first was the re-introduction of the concept of software solving a problem. The second was that community status was not implicit in the original principles and has been added in this new version.

This exercise has prompted me to finish reading Thinking, Fast and Slow by Daniel Kahneman, analysing my own thought processes.
And finally, it helped me to “keep my brain sharp” between contract roles.

Notes on Complex Language

I remain concerned that the complex language used in these principles will hinder the widespread communication and adoption of the concepts.

My concerns are valid, according to this paper by Daniel M. Oppenheimer – “Consequences of Erudite Vernacular Utilized Irrespective of Necessity: Problems with Using Long Words Needlessly”. Essentially, as the level of complexity increases, the text becomes harder to understand and less likely to be accepted by the reader.

Oppenheimer’s emphasis here is on ‘needlessly’ complex words.

…there are many times when a long word is appropriate, because it is more precise or concise…
…select the most appropriate word for a given argument such that decreases in fluency are overridden by increases in other positive attributes…
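Oppenheimer measured this effect experimentally, but a standard readability formula gives a rough feel for it. Here’s a minimal Python sketch using the Flesch Reading Ease score (higher scores mean easier text); the syllable counter is deliberately crude, and both example sentences are my own invention rather than text from the principles:

    import re

    def count_syllables(word):
        """Crude syllable estimate: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/word)."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (syllables / len(words)))

    plain = "Testing changes as the project changes. We adapt how we test."
    ornate = ("Testing methodologies necessitate perpetual recalibration "
              "commensurate with evolving project circumstances.")

    print(f"plain:  {flesch_reading_ease(plain):6.1f}")   # scores high (easy to read)
    print(f"ornate: {flesch_reading_ease(ornate):6.1f}")  # scores very low (hard to read)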

What are your thoughts?

Are precision and brevity (conciseness) more important than first impressions in this case?
Does the original set of principles serve the needs of newcomers adequately, lessening the need for the new version to be approachable?

 
Posted on October 7, 2014 in Software Testing

Let’s Test Oz 2014 – Day 1

It’s 11PM and day 1 hasn’t finished yet; there are still activities happening around the hotel. This is a different style of conference than I’m used to. All participants stay at the same hotel where the keynotes, breakout sessions and test lab take place, so the conference doesn’t actually end at a specific time each night.

I arrived this morning in the picturesque Blue Mountains outside of Sydney. I was excited, nervous, intimidated, keen, and relieved to have made it here. There was a contingent of testers already here, and the conferring had begun before the conference’s opening keynote. About 5 minutes after entering the hotel, most thoughts of intimidation and nervousness were gone. This is where I was meant to be. I mingled and promoted Twitter as a means for learning more about the context-driven testing community.

The opening keynote was delivered by James Bach:

How do I know I am context-driven?

What followed was a wealth of information based on years of research, hands-on experience and debates, condensed into a one-hour talk. This was an excellent summary of what it means to be context-driven, from one of the founders of the context-driven testing community. There was one slide in particular which I could’ve questioned James on for another hour, called Implicit principles of the Context-Driven School of Testing. This slide contains ideas which could fill a book, if James had time to write another book. I think I need to read some more books before I can fully fathom the concepts presented! The beauty of this conference is that I have many opportunities to find James in the hotel and ask about this slide in more detail, ask for advice on recommended further reading, and discuss testing in depth.

As usual, I found James’ talk personally motivating and compelling. Specifically, the categorisation of levels of involvement in the context-driven community felt to me like a call to action and I’ve treated it as such. I will be actively ensuring that I fall into the Committed Practitioner category, and probably also Committed Student as I love to keep learning.

These are some of my favourite quotes from James’ keynote speech:

“A professional society of people trying to be the best they can be” – Yes! This is a growing crowd which I’m proud to be a part of.

“Respect and nurture people who are learning” – James noted the Greeting vs. Challenging methods of introducing testers to the context-driven community, and his tendency towards the latter. There are other leaders in the test community who patiently introduce those who are newly discovering professional testing approaches.

“The product is a solution. If the problem isn’t solved, the product doesn’t work”. Hallelujah!

“Testing has parallels with martial arts: you need to practise, and EARN respect” – I’m paraphrasing here.

“Context-driven testers must be able to answer the question ‘What’s your approach to testing?'” – Oops. I have some homework to do.

“Instead of best-practice, say a practice. For example, we use a practice for defect management.” That’s a huge improvement! This would make the software world a better place 🙂

“You don’t need to promise, quantify or lie. See Keith Klain for more information.” I’m paraphrasing again. This was another call to action for me.

In conclusion…

This is what I got from the FIRST HOUR of this 3-day conference. I’m glad I made the effort to attend; I’d have deeply regretted missing out.

Stay tuned, more to come. But not tonight 🙂

 
Posted on September 15, 2014 in Software Testing

Judging the Software Testing World Cup – Oceania 2014

Preparation
When I was asked if I’d be interested in judging the Software Testing World Cup Oceania event this year, I said “Sure, why not?”. I actually hadn’t heard of it before. From the little information available on the website at the time, I didn’t have a good understanding of the sheer scale of this event.

When I received the first group email communication from Maik I realised that preparations were already well underway. I was overwhelmed at first by the large number of judges who were already involved, and impressed when I recognised some of their names from the online software testing community.

Once we had all agreed on dates for the regional competitions, it seemed like no time at all until the first competition was held in North America. Immediately after the competition there were group emails flying back and forth about test reports, bug details, reproducibility, product owner engagement… I read every email but most of them went over my head. It didn’t occur to me that I could ask Maik & Matt for access to the test reports and the bug tracking tool in order to follow the process more closely. I assumed I wouldn’t be allowed access by the product owners. I also know now that there’s a recorded YouTube stream video of each competition available for public viewing, which I didn’t realise at the time. Mostly I was just feeling overwhelmed and thinking, “We’re next”.

A few days later, with the Oceania competition approaching, I asked if the North America judges had any blog posts or ‘Lessons Learned’ to share with the judges of upcoming events. But of course they were still in the full swing of actual judging, on top of their day jobs, and hadn’t had a chance to put anything together yet.

Competition
On the night of the competition, I sent out an email 1 hour before the start time basically asking, “What should I be doing?”
That led to some emails from Matt, and 30 minutes later I had about 7 tabs open in Firefox on my laptop and I was feeling very confused. We were using Skype, Twitter, Google Drive, YouTube, email, Google Hangouts, HP Agile Manager and the SUT (system under test) concurrently. I started familiarising myself with SocialText, our SUT. I hadn’t realised I’d be on camera and recorded on YouTube, but luckily I wasn’t in my pyjamas.

Due to some technical issues (read: user error – long story) I was hearing the audio live from Google Hangouts and with a 5-second delay in stereo from YouTube. I wasn’t able to mute one without also muting the other. When I explained the problem, Sigge sent me a link to view comments on the live stream without viewing the video, and then I could start to participate properly. Unfortunately, I’d lost more than 20 minutes’ worth of product explanation from the product owner. The other regional judge, Dean, joined just after I did; I think he was having his own technical issues.

For the remainder of the competition, I was Alt-Tabbing my way through Firefox:
– Monitoring YouTube comments and reporting participant questions to the product owners verbally
– Typing up answers to the questions back on YouTube for quick reference
– Constantly switching YouTube comments view from Top Comments to Newest First so I could see if any new questions had been asked (grumble grumble)
– Reviewing the judging categories in Google Drive
– Posting to Twitter, just for fun
– Watching the product owner video stream and chatting to other judges in Google Hangouts
– Reading the bugs that were being raised
– Trying to repro some of the more interesting bugs in the SUT
– Emailing participants who were having technical issues
– I was also sending the occasional one-on-one chat message to other judges using Skype.

About 2 hours into the competition, the participants’ questions had settled down while they got on with raising defects and writing their reports. As soon as we had a moment to think, all of us regional judges in Oceania were asking ourselves the same question:

If this was so complicated and hectic for 3 regional judges with xx a cup load of teams, how will the 3 regional judges for Asia cope with xxx a bucket load of teams?

Judging
From the outset we had two goals for judging the competition:
1. Judge the participating teams and agree on a winner for the region.
2. Think of ways to make the Asia competition manageable for judges and participants.

Every bug was read by one or more judges. Like the other judges, I tried to reproduce the interesting bugs on my own systems. Before even reading the test reports, I could get a feel for which areas each team had focussed their time on.

Every team’s entry was judged by at least 3-4 of the 5 judges. If the discrepancy between scores was above average, we’d judge that team’s entry again until we agreed on the overall score. This is a very fair approach, and also a very time-consuming one.
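To make that concrete, here’s a minimal Python sketch of how a discrepancy check like ours could work. The scoring scale, team names and numbers are my own invention for illustration, not our actual data or process:

    def teams_needing_review(scores):
        """Flag teams whose judge-score spread exceeds the average spread."""
        spreads = {team: max(s) - min(s) for team, s in scores.items()}
        average_spread = sum(spreads.values()) / len(spreads)
        return [team for team, spread in spreads.items()
                if spread > average_spread]

    # Hypothetical scores out of 100, one per judge.
    scores = {
        "Team A": [72, 75, 70, 74],  # spread 5  -> judges broadly agree
        "Team B": [60, 85, 66, 79],  # spread 25 -> re-judge this entry
        "Team C": [88, 84, 86, 90],  # spread 6  -> judges broadly agree
    }
    print(teams_needing_review(scores))  # ['Team B']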

Each day after work, the local judges were online judging entries. We didn’t coordinate to judge at the same time; we all just tried to complete the task expediently, around our existing schedules. We could see each other online in Skype, updating the judges’ notes, creating strange things in the SUT and updating the status of defect reports. I found it helpful to be able to jump on Skype and ask for opinions and clarifications in real-time.

During the judging I found more than a few bugs in the defect management tool. As a thank-you for their sponsorship (read: because the bugs were annoying me so much), I was planning to report the bugs to HP somehow. But now that I’ve finished using the tool, I’ve lost the motivation.

I’d rather not say too much more about the judging publicly until after the final STWC event in November. I’m happy to talk it through with other judges in our Google Group in the meantime.

After the judging I put together some metrics to show myself once again just how useful metrics are. Suffice to say, there was no metric I could find that correctly predicted the winning team. I didn’t try very hard, though. Add a comment below if there’s a metric you’d like me to generate from our competition data, and I’ll let you know if it would have proven useful for judging the quality of the teams’ work.
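If you’d like to try the same exercise on your own competition data, a rank correlation is one simple way to check whether a candidate metric tracked the final result. Here’s a minimal sketch in pure Python; the bug counts and scores below are invented for illustration, not the actual Oceania data:

    def rank(values):
        """Rank positions (1 = highest value); assumes no ties."""
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        ranks = [0] * len(values)
        for position, index in enumerate(order, start=1):
            ranks[index] = position
        return ranks

    def spearman(xs, ys):
        """Spearman rank correlation using the no-ties formula."""
        n = len(xs)
        d_squared = sum((rx - ry) ** 2 for rx, ry in zip(rank(xs), rank(ys)))
        return 1 - (6 * d_squared) / (n * (n * n - 1))

    # Hypothetical data: does raw bug count predict the judges' scores?
    bug_counts   = [34, 12, 27, 19, 41]  # bugs logged per team
    final_scores = [71, 55, 83, 64, 62]  # judges' overall scores

    print(f"{spearman(bug_counts, final_scores):+.2f}")  # +0.30: a weak signal at best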

Tips for Judges
1. Like most volunteer work, this was very rewarding, particularly when you find people in your region who are as passionate about testing as you are.

2. This is an excellent learning experience, as you would expect. As with participants, this is your chance to show your professionalism, work with peers in your region, and learn about different approaches to testing.

3. You don’t need to be in the same region as the competition in order to judge it. There are xxx many teams registered for the upcoming Asia competition and I see 3 local judges listed on the website. Are you free to assist with judging? Why not contact Maik or Matt today and offer to help (perhaps include a link to your LinkedIn profile or Twitter account).

4. Why not write a blog about your experience? It helps you to capture and remember what you’ve learned, and it may help a judge of the next competition who is feeling unprepared and unsure of what to expect.

5. Have some drinks and snacks within reach (or in my case, a great partner within earshot).

6. This is your regional competition. Be actively involved during the competition, speak up during the live chat, ask clarifying questions…

7. Set aside a few days/evenings after the competition to focus on judging.

8. If you have an idea to improve the judging in any way, speak up to Matt and Maik. They’re experienced in this process now, and will be able to discuss it with you further.

9. Sigge, Dean and I have come up with some ideas for processes and improvements, and fed those back to Matt and Maik for consideration.

Some tips for Participants
1. This competition is intensive, and participants’ attention is pulled in multiple directions for the duration. None of the Oceania teams with 1 member completed their Test Report. Most of the 2-member teams also failed to complete the competition. In my opinion, if you are a team of 1 or 2 people, please consider joining up with others to form a team of 3 or 4.

2. If at all possible, co-locate with team members for the competition. Or if you have video conferencing available at the office, use it. There is a lot going on at once, and ease of communication with team members is key.

3. Bug quality is so much more important than quantity. I could write a whole blog post about this point alone, but I shouldn’t really have to for this target audience 🙂 Brush up on your bug advocacy skills.

4. Have multiple monitors available if you can. At a minimum, try to have one screen where the whole team can see and hear the live stream video of the product owner, in addition to everyone having their own screen to work on.

5. Get familiar with the defect reporting tool in advance. Learn how to attach screenshots to issues, for example.

6. The competition has a strict finish time. That’s the deadline to have all bugs and test reports submitted by, not 30 mins later and not the next day 🙂

7. You can find additional tips on Twitter if you search for #STWC2014 (if you’re not on Twitter yet, read this and then join Twitter).

8. Have fun with your team! This is an opportunity to work closely with colleagues and peers. While it’s a simulation of a high-pressure project with a very short timeline, it is just a simulation.

Good luck!

 
Posted on May 17, 2014 in Software Testing

 