Just over a month ago, Test Scouts held the second edition of Test Coast (a software test conference in Gothenburg), and we (Rob Hennersten-Manley and Pierre Alenbrink) had the pleasure of running a morning workshop. Our workshop, “How do I test this?”, focused on dealing with uncertainty in projects, mainly poor, confusing, or contradictory requirements and questionable design. Now, in this first blog post for Test Scouts, we reflect on one of the main sections of the workshop: context matters. 

Throughout this post we use the term “requirements”, but it might be more natural for you to think of these as User Stories or Acceptance Criteria. 

When assessing requirements, we consider a multitude of unspoken assumptions and implicit requirements, all influenced by our individual experiences and expectations. These shape our testing, sometimes consciously but often subconsciously. There is nothing wrong with this; in fact, it is absolutely necessary, as even the smallest project wouldn’t get off the ground if there were a need to write out every single thing in explicit detail. But we testers need to be able to make judgements about what is important; often you can handle vague requirements by leaning on your experience and the other information available to you, even if it is not clearly written as requirements or input documentation. Sometimes you can proceed on those assumptions, though more exact detail will eventually be needed. However, in certain situations it would be imprudent, unethical, or even dangerous to proceed without additional explicit details. 

As ever, when making these judgements it is vital to consider the context of what you’re working on. There are plenty of questions you can ask to help build up an expectation of behavior, for example: What is the nature of the project? Who are the potential users? Are there any comparable products on the market, or is the product already established? What do the other requirements indicate? The list goes on.  

Let’s take an example from our workshop that demonstrates the importance of context. Imagine you are presented with the below, nothing else. 

REQ_NO_CONTEXT1: The user must be able to easily identify and read the current values of the following fields: 

  • AA01  
  • AA02 
  • AA03 

It’s likely that you have numerous questions and concerns about this requirement; bear those questions in mind.  

Now add just one lens: imagine the product you’re testing is a website. Immediately this gives you something to work with; the readings will likely be alphanumeric, and you can compare their appearance to the styles used in other areas. If you add in the type of website and the names of the readings, you’ll be able to make more and more assumptions to aid your testing, without the need for these to be explicitly detailed. Obviously, the ideal project would have comprehensive documentation with style guides and rules, but all too often we don’t have that luxury. 

Now, let’s add in another layer: considering other requirements. Let’s add the below into the mix: 

REQ_NO_CONTEXT2: The user shall be able to toggle on/off the display of the following fields: 

  • AA01  
  • AA02 

Put aside any other questions and just think about the two in relation to each other. What does the second tell us and what problems arise? 

If you toggle off a reading, will it fail REQ_NO_CONTEXT1 because the user can no longer identify the reading? And what happens if AA03 can be toggled?  

If we take the context of this being a website and those being numeric fields, it would be easy to think of some checks to validate each requirement. Independently they could work fine and pass all tests, but if you didn’t take both into account you might miss some valuable testing. 
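To make that interaction concrete, here is a minimal sketch in Python. The `Display` class, its behavior when a field is hidden, and the decision to forbid toggling unlisted fields are all hypothetical stand-ins we invented for illustration, not anything from a real spec:

```python
# Hypothetical model of a display with readable, togglable fields.
class Display:
    def __init__(self, fields, togglable):
        self.values = dict(fields)       # field name -> current value
        self.togglable = set(togglable)  # fields the user may hide
        self.hidden = set()

    def read(self, name):
        # A hidden field can no longer be read.
        if name in self.hidden:
            return None
        return self.values[name]

    def toggle(self, name):
        # This model forbids toggling unlisted fields -- an assumption,
        # since REQ_NO_CONTEXT2 never says what happens to AA03.
        if name not in self.togglable:
            raise ValueError(f"{name} is not togglable")
        if name in self.hidden:
            self.hidden.discard(name)
        else:
            self.hidden.add(name)


display = Display({"AA01": 1, "AA02": 2, "AA03": 3},
                  togglable={"AA01", "AA02"})

# Each requirement passes when checked in isolation...
assert all(display.read(f) is not None for f in ("AA01", "AA02", "AA03"))
display.toggle("AA01")

# ...but together they raise the questions: once AA01 is hidden, does the
# system still satisfy "must be able to identify and read"? And is toggling
# AA03 a bug, a feature, or simply undefined?
assert display.read("AA01") is None
try:
    display.toggle("AA03")
except ValueError:
    pass  # forbidden in this model, but the requirements never say
```

Checking the two requirements separately would never surface either question; only a test that reads one requirement through the lens of the other does.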

Hopefully you can start to see the importance of thinking about the collective and not just individual requirements. What part does a single requirement play in the complete function/feature? And what other requirements might influence or impact this individual requirement you are currently analyzing? 

Now let’s add context to these and see how that changes our perceptions. Imagine the system is a social media platform and the requirements are the below: 

REQ_SM1: The user must be able to easily identify and read the current values of the following fields: 

  • Connection Requests Sent   
  • Connection Requests Received 
  • Current Connections 

REQ_SM2: The user shall be able to toggle on/off the display of the following fields: 

  • Connection Requests Sent   
  • Connection Requests Received 


Immediately you can use prior experience with these types of platforms and start thinking of many more tests, with much greater precision.  

Now the question of whether the third field can be toggled has much more meaning. If you start testing and find that “Current Connections” can be toggled, it is a genuine question of ‘Is this a bug or a feature?’ 

That new context also hints that the third field may have been omitted from REQ_SM2 by mistake.  

But what if we put a different context on it and said the system is a car’s dashboard: 

REQ_CD1: The user must be able to easily identify and read the current values of the following fields: 

  • Road Speed Limit 
  • Cruise Control Speed 
  • Current Speed 

REQ_CD2: The user shall be able to toggle on/off the display of the following fields: 

  • Road Speed Limit 
  • Cruise Control Speed 


Suddenly that third field is much more significant. It is now likely deliberate that Current Speed cannot be toggled off, and the ability to do so is probably a bug, and a severe one at that. 

But if you had not taken REQ_CD1 into account when assessing REQ_CD2, would you have tested toggling Current Speed? 
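One way to capture that reasoning as a check is to test the field REQ_CD2 omits, not just the fields it lists. This is a hypothetical sketch; the `can_toggle` function and field sets are invented for illustration:

```python
# Hypothetical dashboard model: only fields listed in REQ_CD2 may be hidden.
TOGGLABLE = {"Road Speed Limit", "Cruise Control Speed"}
ALL_FIELDS = TOGGLABLE | {"Current Speed"}

def can_toggle(field):
    """Return True if the dashboard allows hiding this field."""
    return field in TOGGLABLE

# Tests derived from REQ_CD2 alone only cover the listed fields:
for field in TOGGLABLE:
    assert can_toggle(field)

# Reading REQ_CD1 and REQ_CD2 together suggests the missing test:
# Current Speed must always be readable, so hiding it should be impossible.
assert not can_toggle("Current Speed")
```

The last assertion is the one a requirement-by-requirement approach would never produce, yet in this context it guards against the most dangerous failure.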

Hopefully this is helping you see the value of using your knowledge of the product, filling in the gaps (to a certain point), taking the small details into consideration, and zooming out to see the full picture. 

At times you will need more information to complete your testing, but don’t be afraid to use assumptions, tacit knowledge, and comparisons to similar products and experiences to make progress while you await more precise detail.