The Fallacy of the Single Point

Ever heard of the 'Fallacy of the Single Cause'?

It refers to the rarity of a single cause producing a particular effect; it turns out the world is more complex than that. Many different inputs are required to create the blended and varied outputs we see in the world around us. Some may contribute more than others, and at different times, but as a rule of thumb for life (and testing), pinning your hopes on one cause is likely to leave you disappointed.

We communicate in stories, but what's the point?

This fallacy has been refined to apply to the penchant for storytelling that is intrinsic to how we communicate. The question is this: how often do you listen to a story and take away a singular outcome or learning? The thing is, the end of a narrative is only part of the journey; a great many stories express subtleties as they progress, especially those drawn from that rich vein of intrigue and oblique learning, reality.

In my eyes, this ability to tell a story has always been critical to testing, whether in the act of testing or reflecting afterwards. 'The Fallacy of the Single Point' has significance here too. As a young tester, I thought I had found a simple formula. Surely, if you cover each requirement with one test (of variable length and scope), you will have fulfilled the testing mission for that product? My approach tried to short-circuit subtlety rather than acknowledge and complement it. While a multi-themed narrative unfolded, I was focused on a single point on the horizon.

So, what does this mean in a testing context?

A test which proves a single point has its intoxications. It allows your mind to partition, to consider a test complete, which, as far as state is concerned, is unhelpful. The inherent complexity of the systems we test creates an intense flux in state, making it as fallible an oracle as any other. Imposed narrowness gives rise to blindness, missing the peripheral aspects of a test that lurk just out of plain sight but affect the outcome nonetheless. The narrowness of that approach also hampers the effective discovery and description of bugs and issues, which require clarity, as the wider picture is relegated to the background.

The opposite of this argument should also be considered. Often I will see tests which prove this, that, the other, and a little extra besides. This is often indicative of a faux efficiency (always the poorer cousin of effectiveness), bought at the cost of the cerebral focus a test requires: trying to maintain an eye on each aspect of a multifaceted test is usually more than us mere humans can effectively handle, resulting in a crucial detail being missed or a connection going unmade.



How do we know if this is happening?

Let us use Session-Based Testing as our context, with a greenfield application where we have very little history or domain background.

When determining charters for sessions, especially early in the testing effort, we may find our focus being overly narrow or wide. There are a number of signals we can look out for to give us information about the width of our focus.

If the charters are too narrow:

"We're done already?" - Imagine a 120 minute session, part of a number of charters to explore a particular piece of functionality, focused on a business critical requirement. Suddenly, 30 minutes in, you feel like it may not be valuable to continue. Make note of this, it may be a natural end to the session but it could also be an indicator of narrow focus.

"Obviously Obvious" - You have a charter on a specific requirement and the session passes without incident, perhaps a few areas for clarity. Someone looks over your shoulder and says "well, that over to the left is obviously broken!" You've missed it. Again, make a note. Perfectly possible that another pair of eyes spotted what you didn't but it may be a signal that you have been too narrow in your focus.

If the charters are too wide:

"Too Many Logical Operators" - Your charter might look like this:

The purpose of this functionality is to import AND export map pins. The business critical input format is CSV BUT one client uses XML, the display format can be either tabular OR spatially rendered. Export can be CSV OR XML.

This charter has at least four pivot points where your testing will need to branch. After creating a charter, look for changes in direction and see how comfortable you are with your pivots. This signal is common beyond charters; I see it often in user stories and the like. Questioning the presence and meaning of logical operators is a behaviour I see in many effective testers.
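If you want a quick smell test for this, counting the logical operators in a charter's wording can act as a crude proxy for its pivot points. Here is a minimal Python sketch of the idea; the operator list and the threshold of three are my own assumptions, to be tuned against your own charters rather than taken as fixed rules.

    import re

    # Crude heuristic: uppercase logical operators in a charter's wording
    # act as a rough proxy for the pivot points it contains.
    PIVOTS = re.compile(r"\b(AND|OR|BUT|EITHER)\b")

    def pivot_count(charter: str) -> int:
        return len(PIVOTS.findall(charter))

    charter = ("The purpose of this functionality is to import AND export map pins. "
               "The business critical input format is CSV BUT one client uses XML, "
               "the display format can be either tabular OR spatially rendered. "
               "Export can be CSV OR XML.")

    if pivot_count(charter) > 3:
        print(f"{pivot_count(charter)} pivots - consider splitting this charter")

Run against the example charter above, this reports four pivots, the same branching a careful read reveals. It will never replace that careful read, but it can prompt one.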

"Can I hold it in my head?" - Our brain only has so much capacity. We all have our individual page size. Consider the charter above. Would be be able to hold all that in your head without decomposing while testing? Would you be able to effectively test it in one session? The answer is (probably) that one cannot.

Is there something simple that can be done?

You can vary the length of your leash: a time limit of your choosing within which to leave the main mission and explore around the functionality, returning once the limit has expired.

Sessions too narrow? Give yourself a longer leash, allowing exposure to the edges of the charter before snapping back to the mission at hand.

Sessions too wide? Shorten the leash, keeping you within touching distance of the parts of the charter you can realistically reach within the session you have defined.
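If you like to make the leash explicit, a timer is all it takes. Below is a minimal Python sketch of the idea; the Leash class and its names are my own invention for illustration, and in practice a kitchen timer serves just as well.

    import time

    class Leash:
        """Time-box for wandering away from the main charter mission."""

        def __init__(self, length_seconds: float):
            # Longer leash for narrow sessions, shorter for wide ones.
            self.length = length_seconds
            self.left_mission_at = None

        def wander(self) -> None:
            # Note the moment you leave the main mission to explore.
            self.left_mission_at = time.monotonic()

        def snapped(self) -> bool:
            # True once the limit expires: time to return to the mission.
            if self.left_mission_at is None:
                return False
            return time.monotonic() - self.left_mission_at >= self.length

    # Usage: a ten minute leash within a longer session.
    leash = Leash(length_seconds=10 * 60)
    leash.wander()        # step off the charter to explore its edges
    if leash.snapped():   # check periodically while wandering
        print("Leash snapped - back to the mission at hand")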

This variable leash approach enables progress while refining the focus of your charters on an iterative basis. As we explore and learn, more effective ways to decompose the system under test will present themselves. The testing story emerges as you move through the system; the challenge is to find the right balance of focus, to ensure that we are not hoodwinked by 'The Fallacy of the Single Point.'
