Tuesday, 7 October 2014

The Procrustean Bed of ISO29119


The old stories can teach us a great deal. Every once in a while I see the parallels between antiquity and the present, shown through the lens of one of these stories.

The tale of Procrustes (first introduced to me by the work of Nassim Nicholas Taleb, who writes with great skill and knowledge) and the introduction of the "ISO29119 standard" resonate with each other in my mind.

The Tale of Procrustes in a Nutshell......
"Procrustes kept a house by the side of the road where he offered hospitality to passing strangers, who were invited in for a pleasant meal and a night's rest in his very special bed. Procrustes described it as having the unique property that its length exactly matched whomsoever lay down upon it. What Procrustes didn't volunteer was the method by which this "one-size-fits-all" was achieved, namely as soon as the guest lay down Procrustes went to work upon him, stretching him on the rack if he was too short for the bed and chopping off his legs if he was too long."
(Source: mythweb.com)

So, let's adapt this for our ISO29119 situation:
"The "ISO29119 standard" purports to be the only internationally-recognized and agreed standards for software testing, which will provide your organization with a high-quality approach to testing that can be communicated throughout the world. Advocates describe it as having the unique property that it offers offers a set of standards which can be used in any software development life cycle. What the advocates don't volunteer is that your business problem will need to be stretched or trimmed to meet the new standard. So rather than testing solving your business problem, the focus will be to deliver to the standard."
Who will be Theseus for the Craft of Testing?

In the end, Theseus (as part of his tests) dealt with Procrustes using his own vicious device. However, this will most likely not be the case here; I believe most thinking testers are advocating the opposite, continuing to champion the principles of Context Driven Testing. Rightly so, as merely rubbishing standards is only one half of the argument. I sincerely hope our community of minds will be our Theseus, but time will tell. The uptake of the "ISO29119 standard" is an unknown; the concern is probably large organisations and government, where group (and double) think can be prevalent. These are the soft targets for peddlers of the cult of the same.

However, all over the development world we desperately and continuously strive to leap into Procrustean Beds, taking shallow solace in "standards", something humans have been doing for a long, long time as a proxy for thought. Once you jump into a Procrustean Bed, you never emerge quite the same.......

Consider investigating..................
http://www.amazon.co.uk/The-Bed-Procrustes-Philosophical-Practical/dp/0241954096
http://www.ipetitions.com/petition/stop29119
http://www.professionaltestersmanifesto.org
http://www.softwaretestingstandard.org
http://www.ministryoftesting.com/2014/08/iso-29119-debate


Friday, 3 October 2014

N things I want from a Business Analyst....

  
Business Analysts. I give them a hard time. I really do. I love them really but I couldn't eat a whole one.

Is something I used to say.

I even went to a Business Analyst meetup once and asked them whether they thought they should still exist in our "agile" world or whether they were being washed away by the tidal wave. Looks can really hurt; in fact, they can be pretty pointy.

I wouldn't do that now though; I think I've grown up a bit. Like any great programmer or tester, they can really add to a team. And, conversely, like a really poor programmer or tester, they can really do some damage. It was unfair to single them out and very possibly bandwagon jumping of the worst kind.

In addition, I fell into a common trap. I was full of hot air when it came to what was bad about Business Analysts, but could not articulate what might make them great.

So here goes............
  • I want a vivid (preferably visual) description of the problem or benefit - let's face it, none of us are George Orwell. We can't describe in words, with clarity and brevity, all the complex concepts present in our lives. However, we can deploy many techniques to bring flat material to life. Elevator pitches, mind maps, product boxes, models, personas and the like are your buddies.
  • I want you to shape a backlog, not provide a shopping list - hearing a backlog described as a shopping list leads me down a path of despair. A backlog is a themed beastie, which needs to be shaped. Delivering the stories in a backlog will not implicitly solve the problem, any more than a lump of marble and a hammer and chisel constitute a statue. Items in a backlog are raw materials. They need sculpting with care to achieve their goals.
  • I want you to work in thirds - for you lucky so-and-sos who are trying to figure out how on earth to cope in the agile tsunami which is enveloping the world, here's a rule of thumb for you. One third current sprint, one third next sprint, one third the future. The remaining 1% is up to you.
  • I want you to be technically aware but not necessarily technically proficient - technical awareness is a beautiful thing, and many testers are a good way down this path. Knowing the strengths and weaknesses of a given technology helps you to realise business benefits, because you can appreciate the whole picture: the need, the constraints, the potential.
  • I want you to really, really try with para-functional requirements - this is in two parts: the response times/capacity/scalability the business needs for the real world, coupled with the constraints of the technology deployed. The answer will be somewhere in the middle. If there is anything I have learnt about performance testing especially, it is that there are few absolutes; para-functional requirements should reflect that subtlety.
  • I want you to be experts in change - in fact, you guys should love change deeply, able to extol its benefits and risks, helping teams to help their stakeholders realise the value of change in their marketplace, not snuffing it out to protect business goals which time has rendered of dubious value.
  • I want you to distinguish between needed and desired - this burns me deeply. The old chestnut about only a small percentage of the product actually being used (linky) is serious business. By not determining the difference between what is needed and what is desired, products are happily helped to fall silently on swords forged by Business Analysts who struggle to articulate this critical difference.
  • I want you to recognise that stories/use cases/whatever are inventory - imagine the backlog as a factory, piles of stuff everywhere that our brains are trying to navigate around, winding a path through these piles trying to find what we need. This takes time and steals from flow, which we can't afford to lose. Before you add an item, stop and consider for a moment whether or not you need it right now.
  • I want you to challenge really technical people to justify value - "Well, we'll need a satellite server configured with Puppet to centralise our upgrade process." Huh? We will? What value does that give the business? That is what I want you to ask. Anything worth building should be worth articulating as a value proposition.
  • I want you to take ownership of the product goddammit - there ARE decisions you can and should make. If you wish to survive the agile tsunami it's time to embrace that change is king, and that means decisions. Big and small, narrow and wide, they are there to be made. By you. YOU.
  • I want you to continuously improve and I'll be watching - I would never want you to do 10 things to improve yourselves. 'N' things please, ever changing in focus to ensure you are delivering value in the contexts in which you find yourselves.

Basically I want you guys to be superhuman. I think you can be.

Some say being a Business Analyst is old hat. I say it is a gift. But only if you embrace it.

Wednesday, 30 July 2014

The 'Just Testing Left' Fallacy

I am mindful that many of my blog posts are descending into mini-tirades against the various fallacies and general abuse of the lexicon of software development.

Humour me, for one last time (that's not true, by the way).

In meetings, at the Scrum of Scrums, in conversation, I keep hearing it.

    "There's just testing left to do"
And then I read this:

http://bigstory.ap.org/article/social-security-spent-300m-it-boondoggle

An all too familiar software development tale of woe.




I thought: 'I bet everyone on that project is saying it too.' Next to water coolers and coffee machines, at the vending machine, in meetings and corridors.

At first, it gnawed at me a little.

Then a lot.

Then more than that.

I have three big problems with it:

  1. It's just not true. There is not 'just testing left.' What about understanding, misunderstanding, clarifying, fixing, discussing, showing, telling, checking, configuring, analysing, deploying, redeploying, building, rebuilding and all the small cycles that exist within? Does that sound like there is 'just testing left?' When I challenge back and say, "You mean there's 'just getting it done left?'" I get an array of raised eyebrows.
  2. It's an interesting insight into how an organisation feels about testing. The implications of such statements about testing might be extensions of: end of the process, tick in the box, holding us up, not sure what the fuss is, my bit is done, it's over the fence. Most affecting for me is the implied: "We are not sure what value testing is adding."
  3. On a personal level, it's not 'just testing.' It's what I do. And I'm good at it. It involves skill, thought, empathy and technical aptitude. I'm serious about it. As serious as you are about being a Project Manager, Programmer, Sys Admin and the rest.

I wouldn't want to ignore the flipside of this argument (my latest neurosis).

What about testers who say:

    "I'm just waiting for the development to finish before I can get started"
What are the implications here, then? Perhaps there is less understanding of how damned hard it is to get complicated things to JUST WORK. Never mind solve a problem. I used to make statements like this. Until I learnt to program. Then I found that even the seemingly simple can be fiendish. And people are merciless in their critique. Absolutely merciless. Not only the testers, but also senior managers who used to be technical and can't understand why it takes so long (mainly because they have forgotten how complicated it can get, filtering out their own troubled past) to build such a 'simple' system.
 

And if I start hearing 'there's just QA left'...................

Sunday, 13 July 2014

The name of the thing is not the thing


I often ponder the question 'should we care about what we call things in the software development world?' One could argue that as long as everyone has a common understanding, it shouldn't matter, right? I rarely see a common understanding (which is good and bad, in context), suggesting that we care enough to name things but sometimes not enough to care about how much precision those names have.

Gerald Weinberg writes in the excellent 'Secrets of Consulting' that 'the name of the thing is not the thing.' As a tester (and critical thinker) this represents a useful message to us. The name given to a thing is not the thing in itself; it's a name, and we shouldn't be fooled by it. This is a useful device, as I believe the name is an important gateway to both understanding and misunderstanding, and names take root and spread.....

Testing is not Quality Assurance

There are probably a great many blogs about this, but I hear/see this every day, so it needs to be said again (and again, and again).

The rise of the phrase 'QA' when someone means 'testing' continues unabated. Those of us who have the vocabulary to express the difference are in a constant correction loop, considered pedants at best, obstructive at worst.

What is at the root of this? The careless use of interchangeable terms (where there is no paradigm for either side of the equation and/or a belief that there is no distinction), followed by wonderment at how expectations have not been met.

So how do I respond to this misnomer?

(Counts to ten, gathers composure) 

Superficially - 'Testing cannot assure quality, but it can give you information about quality.'

If someone digs deeper?

Non superficially - 'When I tested this piece of functionality, I discovered its behaviours. Some are called 'bugs' which may or may not have been fixed. These behaviours were communicated to someone who matters. They then deemed that the information given was enough to make a decision about its quality.'

This feels like a long journey, but one worth making. I will continue to correct, cajole, inform and vehemently argue when I need to. If the expectations of your contribution are consistently misunderstood, then will your contribution as a tester be truly valued?

Test Management Tools Don't Manage Testing

On a testing message board the other day (and on other occasions) I spotted a thread containing the question: 'Which 'Test Management Tool' is best (free) for my situation?' There are many different flavours, with varying levels of cost (monetary and otherwise) accompanying their implementation.

I thought about this question. I came to the conclusion that I dislike the phrase 'Test Management Tool' intensely. In fact, it misleads on a great many levels, not least because its name does not describe it very well at all. It offers no assistance on which tests, in what form, suit the situation; when testing should start and end; who should do it; with what priority; with which persona. I'm not sure such a tool manages anything at all.

So what name describes it accurately? For me, at best it is a 'Test Storage Tool': a place to put tests, data and other trappings to be interacted with asynchronously. Like many other electronic tools, at worst it is an 'Important Information Hiding Place.' To gauge this, put yourself in another's shoes. If you knew little about testing and you were confronted with this term, what would you believe? Perhaps that there is a tool that manages testing? Rather than a human.

So what.....?

So, what's the impact here? I can think of a few, but one springs to mind.

If we unwittingly mislead (or perpetuate myths) by remaining quiet when faced with examples like the above, how do we shape a culture which values and celebrates testing? Saying nothing while what testing is, and the value it adds, is diluted, misrepresented and denigrated certainly helps to shape that culture. Into something you might not like.

Friday, 4 July 2014

Software Testing World Cup - An Experience Report



After much anticipation, three of my colleagues and I embarked on the Software Testing World Cup journey in the European Preliminary. We had prepared, strategised, booked rooms and monitors, bought supplies and worked through all the other tasks (actually quite a long list) to get ready for the big day. Armed with the knowledge that I would be jetting off on holiday the following day, we entered the (metaphorical) arena to give it our all and hopefully have a little fun. Here are my thoughts about 3 interesting (exhausting) hours.

When I reflect.....

  • Over my testing career, I have learnt to really value time to reflect. Study a problem, sleep on it, speak to peers for advice, come up with an approach. That time just doesn't really exist (in the amount that I needed it) during the competition, which made me uncomfortable. A little discomfort can teach you a great deal, and indeed it amplified the more instinctive part of my testing brain.
  • Following on from the above, I'm happy to say I kept my shape. When your instinctive side (coupled with deep-rooted, long-learned behaviours) becomes more prevalent, you can, well, go to pieces a little. I didn't. I listened to the initial discussions with the Product Owners, stuck to time limits, continued to communicate and maintained the Kanban board we had set up, all healthy indicators of some useful learned behaviours!
  • We did quite a lot of preparation and research. We met up a couple of times as a group to discuss our approach and the rules of the competition, which helped massively; discussing the rules as a group meant we quickly built a common understanding. Our preparation went beyond the competition, covering bug advocacy and the principles of testing in a mobile context, to name but a few. However, as we know, very few strategies survive first contact, and our overall strategy was no exception!
  • HOWEVER, I do believe we pivoted our strategy nicely on the day, enabling us to broaden our focus due to the scale of the application and the number of platforms. As a team, we decided to familiarise ourselves with each area (which we had broken down into chunks) on our desktops within a browser, then move on to a specified mobile device (given a steer that iOS would be critical).
  • Finally, I thought it was a really great thing that we decided to be in the same room as a team; it really boosted our ability to validate each other's defects and check in at important times, such as when we were adding to the report.

Now, about the competition itself......

Good!

  • Adding a mobile aspect really created fertile ground for bugs. In fact, I could have raised bugs for the full 3 hours, but the competition was about much more than that. This made the challenge a little different, as it would have been easy just to bug away and lose all perspective. 
  • The small hints before the preliminary were helpful too, allowing us to queue up devices and reach out to our colleagues who had done mobile testing in depth.
  • We had our HP Agile Manager (good grief, the irony in that title) test logins nice and early, which was really helpful for familiarity, although a part of me wished I could have tested that system instead! We got logged in to the real project on the day without any issues, although I'm not sure it was the same for everyone.

Could be better.....

  • A narrower focus of application would have improved the quality and challenge of the defects. To slightly contradict the above, the scope of the application under test was TOO wide! Perhaps a narrower challenge with slightly more gnarly, awkward bugs to find would have been better; I felt I didn't have to work hard (at all) to find bugs, never mind the most important ones.
  • Engaging with the Product Owners was a challenge. While I can see that having one giant pool of questions was advantageous for the wide dissemination of information, I would have liked to have seen teams assigned to a single Product Owner (or a pair). This would have enabled building up more of a rapport, especially as this was one of the areas teams would be judged on.
  • Practically speaking, the start was a little chaotic, moving from streaming URL to streaming URL, but after 10 minutes or so we got there. This reflects so many experiences in the software development world (projects), where we need to find our rhythm.

I think I (we) could have done better. However, I always think that about everything I do; it's part of what keeps me pushing forward with my career. Participating was the key here though, plus I always appreciate a little testing practice; now I'm a little more 'senior' I don't always get the chance!

Friday, 30 May 2014

The Fallacy of the Single Point

 

Ever heard of the 'Fallacy of the Single Cause?'

It refers to the rarity of single causes resulting in particular effects; it turns out the world is more complex than that. Many different inputs are required to create the blended and various outputs we see in the world around us. Some may contribute more than others, and at different times, but as a rule of thumb for life (and testing), pinning your hopes on one cause is likely to leave you disappointed.

We communicate in stories, but what's the point?

This fallacy has been refined to apply to the penchant for storytelling that is intrinsic to how we communicate. The question is this: how often do you listen to a story and take away a singular outcome or learning? The thing is, the end of a narrative is only part of the journey; a great many stories express many subtleties as they progress, especially that rich vein of intrigue and oblique learning: reality.

In my eyes, this ability to tell a story has always been critical to testing, whether in the act of testing or reflecting afterwards. 'The Fallacy of the Single Point' has significance here too. As a young tester, I thought I had found a simple formula. Surely, if you cover each requirement with one test (with a variable degree of length/scope), then you will have fulfilled the testing mission for that product? My approach tried to short-circuit subtlety rather than acknowledge and complement it. While a multi-themed narrative unfolded, I was focused on a single point on the horizon.

So, what does this mean in a testing context?

A test which proves a single point has its intoxications. It allows your mind to partition and to consider a test as complete, which, as far as state is concerned, is unhelpful. The inherent complexity of the systems we test creates an intense flux in state, making such a test as fallible an oracle as any other. Imposed narrowness gives rise to blindness: missing the peripheral aspects of a test, lurking just out of plain sight but affecting the outcome nonetheless. The narrowness of that approach also hampers the effective discovery and description of bugs and issues, which require clarity, as the wider picture is relegated to the background.

The opposite of this argument should also be considered. Often I will see tests which prove this, that, the other and a little extra besides. This is often indicative of a faux efficiency (always the poorer cousin of effectiveness), and it comes at the cost of the cerebral focus a test requires: trying to maintain an eye on each aspect of a multifaceted test is usually more than we mere humans can effectively handle, resulting in the crucial detail being missed or the crucial link not being made.



How do we know if this is happening?

Let us use Session Based Testing as our context, with a greenfield application, where we have very little history or domain background.

When determining charters for sessions, especially early in the testing effort, we may find our focus being overly narrow or wide. There are a number of signals we can look out for to give us information about the width of our focus.

If the charters are too narrow:

"We're done already?" - Imagine a 120 minute session, part of a number of charters to explore a particular piece of functionality, focused on a business critical requirement. Suddenly, 30 minutes in, you feel like it may not be valuable to continue. Make note of this, it may be a natural end to the session but it could also be an indicator of narrow focus.

"Obviously Obvious" - You have a charter on a specific requirement and the session passes without incident, perhaps a few areas for clarity. Someone looks over your shoulder and says "well, that over to the left is obviously broken!" You've missed it. Again, make a note. Perfectly possible that another pair of eyes spotted what you didn't but it may be a signal that you have been too narrow in your focus.

If the charters are too wide:

"Too Many Logical Operators" - Your charter might look like this:

The purpose of this functionality is to import AND export map pins. The business critical input format is CSV BUT one client uses XML, the display format can be either tabular OR spatially rendered. Export can be CSV OR XML.

This charter has at least four pivot points in it where your testing will need to branch. After creating a charter, look for changes in direction and see how comfortable you are with your pivots. This signal is common beyond charters; I see it often in user stories and the like. Questioning the presence and meaning of logical operators is a behaviour I see in many effective testers (there's a rough sketch of this idea after the next signal).

"Can I hold it in my head?" - Our brain only has so much capacity. We all have our individual page size. Consider the charter above. Would be be able to hold all that in your head without decomposing while testing? Would you be able to effectively test it in one session? The answer is (probably) that one cannot.

Is there something simple that can be done?

You can vary the length of your leash: a time limit of your choosing to leave the main mission and explore around the functionality, returning once the limit has expired.

Sessions too narrow? Give yourself a longer leash, allowing for exposure to the edges of the charter before snapping back to the mission at hand.

Sessions too wide? Shorten the leash, keeping you within touching distance of parts of the charter you can reach realistically within the session you have defined.    
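
If it helps to picture it, here is a minimal sketch of the leash as a timebox (illustrative Python; the names are made up, and in practice a kitchen timer does the same job):

import time

class Leash:
    """A timebox for wandering away from the charter's main mission."""

    def __init__(self, minutes: float):
        self.limit_seconds = minutes * 60
        self.started = None

    def wander(self):
        # Start the clock when you leave the main mission to explore.
        self.started = time.monotonic()

    def must_snap_back(self) -> bool:
        # True once the detour has used up its timebox.
        return (self.started is not None
                and time.monotonic() - self.started >= self.limit_seconds)

# Narrow charter? Allow a longer leash. Wide charter? Shorten it.
leash = Leash(minutes=10)
leash.wander()
# ... explore around the edges of the charter ...
if leash.must_snap_back():
    print("Time's up - back to the mission.")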

This variable leash approach enables progress while also refining the focus of your charters on an iterative basis. As we explore and learn, more effective ways to decompose the system under test will present themselves. The testing story emerges as you move through the system under test; the challenge is to find the right balance of focus, to ensure that we are not hoodwinked by 'The Fallacy of the Single Point.'

Monday, 26 May 2014

Reviewed - The Tester's Pocketbook by Paul Gerrard


I had heard a great deal about this little book. Some who had read it appreciated its premise, some were in fairly fundamental disagreement. If a text generates polar opposites of opinion, my interest is immediately piqued! So let's begin with that premise:
"A Test Axiom is something we believe to be self evident and we cannot imagine an exception to it"

I feel this is indeed a risky premise for an approach to testing; it could easily be misinterpreted as a set of iron laws to be followed which will magically output an appropriate approach to testing. With this in mind I set about the enjoyable challenge of dissecting these axioms. Let's take, for example:
"Testing requires a known, controlled environment"

There are absolutely benefits to this statement, but also damaging flipsides for your test approach. A known, controlled environment is limited in variance, and therefore only able to expose bugs of a certain nature. In addition, tests run in such an environment can give false signals, as the variance and scale of the real world change outcomes.

On the reverse of this, I found a number of 'axioms' more challenging:
"Testing needs stakeholders"

I can imagine a great deal of variance here in terms of who the stakeholder is, their agenda and their beliefs, but testing without an audience? Can I imagine where this is not axiomatic? Stakeholders may see testing as a 'hoop to jump through' rather than a skilful way of providing them with the information they need, and feel they don't need testing, but testing needs stakeholders to provide context for scope and priority decisions.

The 'Axioms' form part of the 'First Equation of Testing':
"Axioms + Context + Values + Thinking = Approach" 

I found this to be another challenging assertion, as the application of axioms in an equation could be interpreted as a formula for success, whereas the real challenge of testing exists in the spaces between the constituent parts of the formula and how they interact. I see danger in creating formulas and checklists for testing, as it perpetuates the linear, tickbox impression of testing as a craft. In fairness to the author, the overall tone of the piece encourages the application of the axioms and formula as part of a wider toolkit.

Although I found myself disagreeing with (what I saw as) the overall premise of the text, the author strongly advocates a focus on stakeholders and, by extension, providing the right information at the right time to those who make decisions. These sections are well worth reading and paying attention to; I have certainly applied some of those ideas to my recent work, and they provided an excellent framework for thought and approach. The author builds from a fairly formal approach to testing to give due attention to the spectrum of formality and the value of a varied approach. Initially I felt the text suffered from a lack of acknowledgement of the complexity of contemporary systems, but this acknowledgement grew as the text progressed, which helped to provide a more rounded view of testing.

I found the author's real-world experience shone through towards the end of the text; the reality of delivery is evident, although I think the author leans too far towards testing being concerned with the cost of failure rather than the benefit of the system meeting its mission. Both are important, but I prefer the positive side of this coin and I believe testing enjoys a better profile when approached from that standpoint.

A thoroughly recommended read for testers of all levels of experience and seniority. I will use some of the 'axioms' as part of my testing toolkit, although with an eye on their fallibility. I'll end with my favourite 'axiom', which is certainly one I believe in:
"Testing never finishes. It stops."