Daniel Kraus

A Recipe for Successful Exploratory Testing

Originally posted on Medium.

In today’s agile and automated realm of software development, I often hear people ask: What actually is the role of manual testing? Assume you live in a perfect world where:

  • You always do test-driven development (TDD)
  • All your tests are automated
  • These tests produce 100% code coverage

Do you still need manual testing? Well, let’s first reflect on the three points above. TDD typically means that both production and test code are written by the same developer. In addition, these tests always exercise the same paths. As a consequence, you need a lot of them: not just to achieve your 100% code coverage, but also to cover the risks of the system under test (SUT). Moreover, automated tests are expensive, not only to maintain but also to create, especially the higher you are in the test pyramid. Finally, many people within the testing community say that automated tests don’t do testing; they do checking:

Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.

(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

(A check is an instance of checking.)

Especially now, with AI-based testing tools emerging both in academia and industry, this is a pretty interesting topic. We can easily let machines perform automated checking, but automated testing is still an open research question.
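To make the distinction concrete, here is a minimal sketch of what such an automated check could look like, written with Python and pytest. The `myapp` module and its `create_user` function are hypothetical stand-ins for your SUT; the point is only that a check applies a fixed, algorithmic decision rule to one specific observation:

```python
import pytest

from myapp import create_user  # hypothetical SUT module


def test_create_user_rejects_empty_name():
    # A check: the same input, the same path, the same decision rule
    # on every run. It can confirm an expectation, but it cannot
    # question, explore, or learn anything new about the product.
    with pytest.raises(ValueError):
        create_user(name="")
```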

I think a solid QA strategy needs both: smart automated checking, but smarter manual testing. Automate the simple things to reveal trivial bugs, and implement “[…] cleverly designed checks that monitor systems for important disturbances.” (Deep checking.) As for manual testing, don’t just perform the tests that are too expensive to automate; constantly explore and exploit the SUT. This is where exploratory testing comes in. In “Exploratory Testing Explained”, James Bach defines it as “simultaneous learning, test design and test execution” and describes it as follows:

[…] any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

Doesn’t this sound good? Indeed, but without some guidance, exploratory testing can become a mess. Let me show you three basic ingredients that, I think, make a successful recipe for exploratory testing.

1st Ingredient: Timebox ⏰

The timebox defines the amount of time available for exploratory testing, usually between 30 and 60 minutes. Within this time frame, people are extremely focused, which increases the chance of finding bugs. Moreover, it makes planning possible: a team can decide per sprint, week, or day how much time they want to spend on exploratory testing. This way, others (e.g. developers or domain experts) can easily incorporate some testing into their daily routines as well.

2nd Ingredient: Testing Tour 🗺

The concept of testing tours has been around for a while. According to Michael Bolton’s blog post “Of Testing Tours and Dashboards”, the history of “touring” goes back to the mid-90s, and the practice became more and more popular around 2005. The idea: a tourist who finds themselves in a new city usually explores it on the basis of topics that interest them personally, such as museums, shopping facilities, bars, etc. The very same concept can be applied to exploratory testing: instead of having no focus and no control during testing, one can specify topics around which the SUT can be explored. Here are some examples:

  • Antisocial 🚷: always do the opposite
  • Garbage collector 🗑: go street by street, house by house
  • Landmark 🗽: most important features in different orders
  • Supermodel 💃: GUI only

Testing tours help to structure and organize exploratory testing so that, e.g., people don’t test the same feature over and over again while other parts of the SUT aren’t tested at all. With distinct and catchy names like the ones above, tours quickly become part of the team’s vocabulary and can also help new team members get to know the SUT.
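Tours also lend themselves to being encoded as data, e.g. to plan who takes which tour in a given timebox. Here is a minimal sketch in Python; the tour names come from the list above, while `pick_tour` and its selection logic are just an illustration, not part of any formal method:

```python
import random

# The tours from above, encoded as a simple lookup table.
TOURS = {
    "antisocial": "always do the opposite",
    "garbage collector": "go street by street, house by house",
    "landmark": "most important features in different orders",
    "supermodel": "GUI only",
}


def pick_tour(exclude=()):
    """Pick a random tour, optionally skipping recently used ones."""
    candidates = [name for name in TOURS if name not in exclude]
    return random.choice(candidates)


# E.g. avoid repeating yesterday's landmark tour:
print(pick_tour(exclude=("landmark",)))
```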

3rd Ingredient: Protocol 📓

Protocols are usually quite unpopular; they require unpleasant writing effort, and often no one reads them. But when it comes to exploratory testing, protocols can be a powerful tool. First of all, they simply record what has been tested. This makes it possible to go back in time and see why a particular bug hasn’t been found. It also gives you the opportunity to see what worked in the past and what didn’t. Furthermore, if a test case has proven to be very effective, the corresponding protocol(s) can serve as a blueprint for automating it. Let’s have a look at an example:

```
# 2018-11-02T09:30

Scope: my software
Tour: landmark
Timebox: 30 minutes
Legend: (C)orrect, (B)ug, (?) unknown

# Test Cases

## 1

1) open menu "x" via "y" - C
2) select "z" and type in "foo" - C
3) fill out form bottom to top - B

...
```

This protocol uses Markdown for formatting, which makes it easy to convert to other formats such as HTML or PDF (e.g. `pandoc protocol.md -o protocol.html` with Pandoc). The protocol itself should be lightweight and informal; this encourages people to actually write and read it. Plus, a protocol like this can also help new team members get on board more easily.
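And since the format is plain text with a fixed legend, protocols are also easy to process by machine. Here is a minimal sketch that tallies the results per protocol, assuming the protocols are stored as Markdown files in a hypothetical `protocols/` directory and follow the step format shown above:

```python
import re
from pathlib import Path

# Matches test steps such as: 1) open menu "x" via "y" - C
STEP = re.compile(r'^\d+\)\s+.*\s-\s([CB?])\s*$')


def tally(protocol: Path) -> dict:
    """Count (C)orrect, (B)ug, and (?) unknown results in one protocol."""
    counts = {"C": 0, "B": 0, "?": 0}
    for line in protocol.read_text().splitlines():
        match = STEP.match(line.strip())
        if match:
            counts[match.group(1)] += 1
    return counts


for path in Path("protocols").glob("*.md"):
    print(path.name, tally(path))
```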


Note that these are just loose guidelines. Some teams like to use Git-versioned Markdown files for their protocols, whereas others may prefer Excel sheets in a private cloud. Feel free to customize the entire process to your team’s/department’s/organization’s individual needs, but without disregarding the basic concepts.

In that sense, bon appétit!

Top comments (1)

Ben Halpern

Wonderful post Daniel. Definitely going to refer back to this in the future.