Musings from James Whittaker

2011-03-23

James A. Whittaker wrote three thought-provoking series on the Google Testing Blog. I get the feeling he likes the number seven. Every one of his posts provided me with a unique and colorful take on what can sometimes be a dry subject. The abstract concepts shared in his posts shaped my foundational beliefs about testing software.

Below you will find a few sentences per post that I think capture the essence of what Whittaker is conveying. The final series is a bit different from the first two; it's an inside view of the structure and process inside the software shop known as Google.

The Seven Plagues of Software Testing

  1. Aimlessness - Do not test for the sake of testing. Every test should have a goal. Document what works and analyze what doesn't. Then, share with your colleagues.1
  2. Repetitiveness - Running the same test suite over and over without finding new bugs does not mean that there are no bugs. Variation is healthy (see the sketch after this list).2
  3. Amnesia - Chances are the problem you are trying to solve has been solved before. If the same issue keeps stinging you, or you had to answer a question the hard way, document it and put it where others will find it.3
  4. Boredom - A bored tester rushes through the tactical aspects of testing without considering the interesting strategic aspects. The day testing gets "figured out" is the day it gets completely automated away.4
  5. Homelessness - Testers are homeless. They don't actually live in the software like users do. Some bugs are only found with the hands of users doing their work in their environment.5
  6. Blindness - Testers require tools to provide helpful feedback from software. It's tempting to settle down with a trusty set of tools, but doing so causes self-inflicted blindness to a growing ecosystem of useful feedback.6
  7. Entropy - Testers increase entropy by giving developers things to do. This is unavoidable, but it can be minimized: as developers do more during development, testers add less work, and entropy tends towards a minimum.7
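
Whittaker doesn't prescribe a mechanism for variation, but one cheap way to fight repetitiveness is to randomize test inputs while logging the seed, so any failure stays reproducible. A minimal sketch using only Python's standard library:

    import random
    import unittest

    class VariationTest(unittest.TestCase):
        """Each run exercises fresh random input instead of replaying
        the exact same steps, so the suite keeps probing new ground."""

        def test_sum_is_order_independent(self):
            seed = random.randrange(2**32)  # log the seed so failures reproduce
            rng = random.Random(seed)
            prices = [rng.randint(1, 100) for _ in range(rng.randint(1, 20))]
            shuffled = list(prices)
            rng.shuffle(shuffled)
            # The invariant under test: a total doesn't depend on item order.
            self.assertEqual(sum(prices), sum(shuffled), msg=f"seed={seed}")

    if __name__ == "__main__":
        unittest.main()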

An Ingredients List for Testing

  1. Product expertise - A good developer knows how the product works; a good tester knows how to use it.8
  2. Bill of materials - Testers should be able to reference a complete list of features that can be tested.9
  3. Risk analysis - Features are not equally important, or equally time-consuming to test. Have a model to quantitatively analyze the risk of each feature (a toy model follows this list).10
  4. Domain expertise - It is not enough to be good at testing. Testers also need expertise with the technologies of the domain the product operates in.11
  5. Test guidance - Whether it be technique, nomenclature, or history, testers need a way to identify and store tribal knowledge of the team.12
  6. Variation - Tests often get stale. Wasting time running stale tests is also a form of risk. Adding variation can breathe new life into stale tests.13
  7. Completeness analysis - Teams need a model to measure how well their testing efforts have covered the risk landscape of their product.14
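
Ingredients 2, 3, and 7 compose naturally: score each entry in the bill of materials, test in descending risk order, and report how much of the total risk has been covered. The scoring model below (usage × impact) and the feature names are invented for illustration; Whittaker only asks that the model be quantitative.

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        usage: int    # 1 (rarely used) .. 10 (constantly used)
        impact: int   # 1 (cosmetic failure) .. 10 (catastrophic failure)
        tested: bool = False

        @property
        def risk(self) -> int:
            # Toy model: risk grows with how often the feature is used
            # and how badly its failure would hurt.
            return self.usage * self.impact

    def completeness(features):
        """Share of total risk covered so far (completeness analysis)."""
        total = sum(f.risk for f in features)
        covered = sum(f.risk for f in features if f.tested)
        return covered / total if total else 1.0

    # The bill of materials, scored; riskiest untested feature first.
    bom = [
        Feature("checkout", usage=9, impact=10, tested=True),
        Feature("search", usage=8, impact=6),
        Feature("profile themes", usage=2, impact=1, tested=True),
    ]
    for f in sorted(bom, key=lambda f: f.risk, reverse=True):
        print(f"{f.name:15} risk={f.risk:3} {'tested' if f.tested else 'UNTESTED'}")
    print(f"risk coverage: {completeness(bom):.0%}")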

How Google Tests Software

Engineers are loaned out to product teams on an as-needed basis and are free to change teams at their own cadence.15

Developers own quality while testers support developers with tools and feedback. As developers get better at testing, fewer testers are needed. Successful teams have higher developer-to-tester ratios.15

Development and test are not treated as separate disciplines: developers test and testers code.16 Instead, each of the three roles looks at the product from a different angle:

  • SWE (Software Engineer) - Feature creators responsible for their work. SWEs design and write features, and then prove they work by writing and running tests.
  • SET (Software Engineer in Test) - Codebase caretakers who enable SWEs to write tests. SETs refactor code for testability and write test features, including test doubles and test frameworks (a sample test double follows this list).
  • TE (Test Engineer) - Product experts who analyze quality and risk from the perspective of the user. TEs write large tests and automation scripts, as well as drive test execution and interpret the results.17
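
To make the SET role concrete, here is what one of those test features might look like: a hand-rolled test double standing in for a network-backed dependency so a SWE's test stays small and fast. The payment-gateway interface is invented for the sketch.

    import unittest

    class FakePaymentGateway:
        """A test double exposing the same charge() interface as the
        real, network-backed gateway (interface invented for this sketch)."""

        def __init__(self):
            self.charges = []

        def charge(self, amount_cents):
            self.charges.append(amount_cents)
            return True  # always approve; a variant could simulate declines

    def checkout(gateway, amount_cents):
        # Stand-in for the SWE's feature code under test.
        return "paid" if gateway.charge(amount_cents) else "declined"

    class CheckoutTest(unittest.TestCase):
        def test_checkout_charges_exactly_once(self):
            gateway = FakePaymentGateway()
            self.assertEqual(checkout(gateway, 500), "paid")
            self.assertEqual(gateway.charges, [500])

    if __name__ == "__main__":
        unittest.main()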

SETs and TEs are usually not involved early in the design phase of a product. Only when the product gains traction do they begin to exert their influence.18 19

SETs and SWEs have similar skill sets. Conversions from one role to another are common.20

Quality is a work in progress that relies on getting the product out to users and receiving feedback as quickly as possible. As it's being developed, a release is pushed through several channels in order of increasing confidence in quality (sketched in code after the list):

  • Canary - Only fit for ultra-tolerant users running experiments.
  • Dev - Used by developers for day-to-day work.
  • Test - Used internally for day-to-day work.
  • Beta/Release - Fit for external exposure.21
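
The ladder is easy to picture as data. In the sketch below, the channel names come from the post, while the pass/fail quality bars are invented; a build climbs until it fails a bar.

    CHANNELS = ["canary", "dev", "test", "beta"]

    def highest_channel(quality_bar_passed):
        """Promote a build up the ladder, stopping at the first failed bar."""
        reached = None
        for channel in CHANNELS:
            if not quality_bar_passed.get(channel, False):
                break
            reached = channel
        return reached

    # This build clears canary and dev but not the internal test bar.
    print(highest_channel({"canary": True, "dev": True, "test": False}))  # -> dev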

Tests are classified by scope, falling into three categories (illustrated after the list):

  • Small - Covers a single function, focusing on logic.
  • Medium - Covers a function and its nearest neighbors, focusing on interoperability.
  • Large - Covers an entire user scenario, focusing on business requirements.22
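
A toy illustration of the first two scopes, with an invented price-parsing feature; a large test would drive the running product through a full user scenario, which doesn't fit a standard-library sketch.

    import unittest

    def parse_price(text):
        """Toy feature under test: '$4.20' -> 420 cents."""
        dollars, cents = text.lstrip("$").split(".")
        return int(dollars) * 100 + int(cents)

    def format_price(cents):
        """Its nearest neighbor: 420 -> '$4.20'."""
        return f"${cents // 100}.{cents % 100:02d}"

    class SmallTest(unittest.TestCase):
        # Small: a single function, pure logic, no collaborators.
        def test_parse(self):
            self.assertEqual(parse_price("$4.20"), 420)

    class MediumTest(unittest.TestCase):
        # Medium: the function and its nearest neighbor, interoperating.
        def test_round_trip(self):
            self.assertEqual(parse_price(format_price(420)), 420)

    if __name__ == "__main__":
        unittest.main()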

If a test doesn't require human cleverness or intuition, it is automated. Bug reporting is automated too.22
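
The post doesn't say how bug filing is automated; one way to picture it is a test-result hook that files a bug the moment a test fails. The file_bug function below is a stand-in for a real issue-tracker API.

    import unittest

    def file_bug(title, details):
        # Stand-in for a real issue-tracker call; the tracker is hypothetical.
        print(f"[bug filed] {title}: {details}")

    class AutoFilingResult(unittest.TextTestResult):
        """Files a bug as soon as a test fails, instead of waiting on a human."""

        def addFailure(self, test, err):
            super().addFailure(test, err)
            file_bug(f"Automated test failed: {test.id()}",
                     f"{err[0].__name__}: {err[1]}")

    class Example(unittest.TestCase):
        def test_demo(self):
            self.assertEqual(2 + 2, 5)  # fails on purpose to trigger the hook

    if __name__ == "__main__":
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(Example)
        unittest.TextTestRunner(resultclass=AutoFilingResult).run(suite)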