Sunday, 13 July 2014

The name of the thing is not the thing


I often ponder the question 'should we care about what we call things in the software development world?' One could argue that as long as everyone has a common understanding, then it shouldn't matter, right? Yet I rarely see a common understanding (which is good and bad, depending on context), suggesting that we care enough to name things, but sometimes not enough about the precision of those names.

Gerald Weinberg writes in the excellent 'Secrets of Consulting' that 'the name of the thing is not the thing.' As a tester (and critical thinker) this is a useful message for us. The name given to a thing is not the thing itself; it's a name, and we shouldn't be fooled by it. This is a useful device, as I believe the name is an important gateway to both understanding and misunderstanding, and names take root and spread.....

Testing is not Quality Assurance

There are probably a great many blogs about this, but I hear/see this every day, so it needs to be said again (and again, and again).

The rise of the phrase 'QA' when someone means 'testing' continues unabated. Those of us who have the vocabulary to express the difference are in a constant correction loop, considered pedants at best, obstructive at worst.

What is at the root of this? Careless use of interchangeable terms (where there is no clear paradigm for either side of the equation, and/or a belief that there is no distinction), followed by wonderment at how expectations have not been met.

So how do I respond to this misnomer?

(Counts to ten, gathers composure) 

Superficially - 'Testing cannot assure quality, but it can give you information about quality.'

If someone digs deeper?

Non-superficially - 'When I tested this piece of functionality, I discovered its behaviours. Some of these are called 'bugs', which may or may not have been fixed. These behaviours were communicated to someone who matters. They then deemed that the information given was enough to make a decision about its quality.'

This feels like a long journey, but one worth making. I will continue to correct, cajole, inform and vehemently argue when I need to. If the expectations of your contribution are consistently misunderstood, then will your contribution as a tester be truly valued?

Test Management Tools Don't Manage Testing

On a testing message board the other day (and on other occasions) I spotted a thread containing the question: 'Which 'Test Management Tool' is best (free) for my situation?' There are many different flavours, with varying levels of cost (monetary and otherwise) accompanying their implementation.

I thought about this question and came to the conclusion that I dislike the phrase 'Test Management Tool' intensely. It misleads on a great many levels, chiefly because its name does not describe it very well at all. It offers no assistance on which tests, in what form, suit the situation; when testing should start and end; who should do it; with what priority; with which persona. I'm not sure such a tool manages anything at all.

So what name describes it accurately? For me, at best it is a 'Test Storage Tool': a place to put tests, data and other trappings to be interacted with asynchronously. Like many other electronic tools, at worst it is an 'Important Information Hiding Place.' To gauge this, put yourself in another's shoes. If you knew little about testing and you were confronted with this term, what would you believe? Perhaps that there is a tool that manages testing, rather than a human?

So what.....?

So, what's the impact here? I can think of a few, but one springs to mind.

If we unwittingly mislead (or perpetuate myths) by remaining quiet when faced with examples like the above, how do we shape a culture which values and celebrates testing? Saying nothing while what testing is, and the value it adds, is diluted, misrepresented and denigrated certainly helps to shape that culture. Into something you might not like.

Friday, 4 July 2014

Software Testing World Cup - An Experience Report



After much anticipation, three of my colleagues and I embarked on the Software Testing World Cup journey in the European Preliminary. We had prepared, strategised, booked rooms/monitors, bought supplies and completed the (actually quite long) list of other tasks to get ready for the big day. Armed with the knowledge that I would be jetting off on holiday the following day, we entered the (metaphorical) arena to give it our all and hopefully have a little fun. Here are my thoughts about 3 interesting (exhausting) hours.

When I reflect.....

  • Over my testing career, I have learnt to really value time to reflect. Study a problem, sleep on it, speak to peers for advice, come up with an approach. That time just doesn't exist (in the amount that I needed it) during the competition, which made me uncomfortable. A little discomfort can teach you a great deal, though, and it amplified the more instinctive part of my testing brain.
  • Following on from the above, I'm happy to say I kept my shape. When your instinctive side (coupled with deep-rooted, long-learned behaviours) becomes more prevalent, you can, well, go to pieces a little. I didn't. I listened to the initial discussions with the Product Owners, stuck to time limits, continued to communicate, maintained the Kanban board we had set up: all healthy indicators of some useful learned behaviours!
  • We did quite a lot of preparation and research. We met up a couple of times as a group to discuss our approach and the rules of the competition, which helped massively; discussing the rules together meant we quickly built a common understanding. Our preparation went beyond the competition itself, covering bug advocacy and the principles of testing in a mobile context, to name but a few. However, as we know, very few strategies survive first contact, and our overall strategy was no exception!
  • HOWEVER, I do believe we pivoted our strategy nicely on the day, broadening our focus to cope with the scale of the application and the number of platforms. As a team, we decided to familiarise ourselves with each area (which we had broken down into chunks) on our desktops within a browser, then move on to a specified mobile device (given a steer that iOS would be critical).
  • Finally, I thought it was a really great thing that we decided to be in the same room as a team; it really boosted our ability to validate each other's defects and check in at important times, such as when we were adding to the report.

Now, about the competition itself......

Good!

  • Adding a mobile aspect really created fertile ground for bugs. In fact, I could have raised bugs for the full 3 hours, but the competition was about much more than that. This made the challenge a little different, as it would have been easy just to bug away and lose all perspective. 
  • The small hints before the preliminary were helpful too, allowing us to queue up devices and reach out to our colleagues who had done mobile testing in depth.
  • We had our HP Agile Manager (good grief, the irony in that title) test logins nice and early, which was really helpful for familiarity, although a part of me wished I could have tested that system instead! We got logged in to the real project on the day without any issues, although I'm not sure it was the same for everyone.

Could be better.....

  • A narrower focus of application would have improved the quality and challenge of the defects. To slightly contradict the above, the scope of the application under test was TOO wide! Perhaps a narrower challenge with slightly more gnarly, awkward bugs to find would have served better; I felt I didn't have to work hard (at all) to find bugs, never mind the most important ones.
  • Engaging with the Product Owners was a challenge. While I can see that having one giant pool of questions was advantageous for the wide dissemination of information, I would have liked to have seen teams assigned to one (or a pair of) Product Owners. This would have enabled building up more of a rapport, especially as this was one of the areas teams would be judged on.
  • Practically speaking, the start was a little chaotic, moving from streaming URL to streaming URL, but after 10 minutes or so we got there. This reflects so many experiences in the software development world (projects) where we need to find our rhythm.

I think I (we) could have done better. However, I always think that about everything I do; it's part of what keeps me pushing forward in my career. Participating was the key here though, plus I always appreciate a little testing practice; now I'm a little more 'senior' I don't always get the chance!

Friday, 30 May 2014

The Fallacy of the Single Point

 

Ever heard of the 'Fallacy of the Single Cause?'

It refers to the rarity of a single cause resulting in a particular effect; it turns out the world is more complex than that. Many different inputs are required to create the blended and various outputs we see in the world around us. Some may contribute more than others, and at different times, but as a rule of thumb for life (and testing), pinning your hopes on one cause is likely to leave you disappointed.

We communicate in stories, but what's the point?

This fallacy has been refined to apply to the penchant for storytelling that is intrinsic to how we communicate. The question is this: how often do you listen to a story and take away a singular outcome or learning? The thing is, the end of a narrative is only part of the journey; a great many stories express subtleties as they progress, especially that rich vein of intrigue and oblique learning: reality.

In my eyes, this ability to tell a story has always been critical to testing, whether in the act of testing or reflecting afterwards. 'The Fallacy of the Single Point' has significance here too. As a young tester, I thought I had found a simple formula. Surely, if you cover each requirement with one test (of variable length/scope), then you will have fulfilled the testing mission for that product? My approach tried to short-circuit subtlety rather than acknowledge and complement it. While a multi-themed narrative unfolded, I was focused on a single point on the horizon.

So, what does this mean in a testing context?

A test which proves a single point has its intoxications. It allows your mind to partition, to consider a test complete, which, as far as system state is concerned, is unhelpful. The inherent complexity of the systems we test creates an intense flux in state, making any single test as fallible an oracle as any other. Imposed narrowness gives rise to blindness, missing the peripheral aspects of a test, lurking just out of plain sight but affecting the outcome nonetheless. The narrowness of that approach also hampers the effective discovery and description of bugs and issues, which require clarity, as the wider picture is relegated to the background.

The opposite of this argument should also be considered. Often I will see tests which prove this, that, the other and a little extra besides. This is often indicative of a faux efficiency (always the poorer cousin of effectiveness), bought at the cost of the cerebral focus a test requires: trying to keep an eye on each aspect of a multifaceted test is usually more than we mere humans can effectively handle, resulting in a crucial detail being missed or a link not being made.
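To make that multifaceted failure mode concrete, here is a minimal sketch, translating the idea into automated checks for a moment. The toy import_pins and render_table functions, and the data, are my own inventions, purely for illustration:

```python
import unittest

# Toy implementations, invented purely for illustration.
def import_pins(csv_text):
    """Parse 'name,x,y' lines into pin dicts."""
    pins = []
    for line in csv_text.strip().splitlines():
        name, x, y = line.split(",")
        pins.append({"name": name, "x": float(x), "y": float(y)})
    return pins

def render_table(pins):
    """Render pins as a plain-text table, one row per pin."""
    return "\n".join(f"{p['name']}\t{p['x']}\t{p['y']}" for p in pins)

CSV = "Head Office,1.0,2.0\nWarehouse,3.5,4.5"

class MultifacetedTest(unittest.TestCase):
    # One test proving this, that and the other: if the final
    # assertion fails, which behaviour is actually broken?
    def test_import_and_display(self):
        pins = import_pins(CSV)
        self.assertEqual(len(pins), 2)
        table = render_table(pins)
        self.assertIn("Head Office", table)
        self.assertIn("4.5", table)

class FocusedTests(unittest.TestCase):
    # Each test proves a single point and can fail for one reason.
    def test_import_yields_one_pin_per_line(self):
        self.assertEqual(len(import_pins(CSV)), 2)

    def test_table_contains_pin_names(self):
        self.assertIn("Head Office", render_table(import_pins(CSV)))

if __name__ == "__main__":
    unittest.main()
```

When the multifaceted test fails, you must re-investigate everything it touched before you know which behaviour broke; each focused test can only fail for one reason.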



How do we know if this is happening?

Let us use Session Based Testing as our context, with a greenfield application, where we have very little history or domain background.

When determining charters for sessions, especially early in the testing effort, we may find our focus being overly narrow or wide. There are a number of signals we can look out for to give us information about the width of our focus.

If the charters are too narrow:

"We're done already?" - Imagine a 120 minute session, part of a number of charters to explore a particular piece of functionality, focused on a business critical requirement. Suddenly, 30 minutes in, you feel like it may not be valuable to continue. Make note of this, it may be a natural end to the session but it could also be an indicator of narrow focus.

"Obviously Obvious" - You have a charter on a specific requirement and the session passes without incident, perhaps a few areas for clarity. Someone looks over your shoulder and says "well, that over to the left is obviously broken!" You've missed it. Again, make a note. Perfectly possible that another pair of eyes spotted what you didn't but it may be a signal that you have been too narrow in your focus.

If the charters are too wide:

"Too Many Logical Operators" - Your charter might look like this:

The purpose of this functionality is to import AND export map pins. The business critical input format is CSV BUT one client uses XML, the display format can be either tabular OR spatially rendered. Export can be CSV OR XML.

This charter has at least four pivot points where your testing will need to branch. After creating a charter, look for changes in direction, and see how comfortable you are with your pivots. This signal is common beyond charters; I see it often in user stories and the like. Questioning the presence and meaning of logical operators is a behaviour I see in many effective testers.
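As a rough illustration, here is a minimal sketch that counts a charter's logical operators as a crude signal of its pivot points. It assumes plain-text charters and a naive whole-word match, so treat it as a prompt for a conversation, not a metric:

```python
import re

# A crude signal, not a measurement: count the logical-operator
# words in a charter's text as a proxy for its pivot points.
PIVOT_WORDS = {"and", "or", "but"}

def pivot_count(charter: str) -> int:
    """Count whole-word logical operators in the charter text."""
    words = re.findall(r"[a-z]+", charter.lower())
    return sum(1 for w in words if w in PIVOT_WORDS)

charter = ("The purpose of this functionality is to import AND export "
           "map pins. The business critical input format is CSV BUT one "
           "client uses XML, the display format can be either tabular OR "
           "spatially rendered. Export can be CSV OR XML.")

count = pivot_count(charter)  # 4 for the charter above
if count >= 4:
    print(f"{count} pivot points - consider splitting this charter")
```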

"Can I hold it in my head?" - Our brain only has so much capacity. We all have our individual page size. Consider the charter above. Would be be able to hold all that in your head without decomposing while testing? Would you be able to effectively test it in one session? The answer is (probably) that one cannot.

Is there something simple that can be done?

You can vary the length of your leash: a time limit of your choosing to leave the main mission and explore around the functionality, returning once the limit has expired.

Sessions too narrow? Give yourself a longer leash, allowing exposure to the edges of the charter before snapping back to the mission at hand.

Sessions too wide? Shorten the leash, keeping you within touching distance of the parts of the charter you can realistically reach within the session you have defined.
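If it helps to picture the mechanics, here is a minimal sketch of the leash as a simple timer. The class and its names are my own invention, not an established tool:

```python
import time

class Leash:
    """Timebox for off-charter detours: wander, then snap back."""

    def __init__(self, minutes: float):
        self.limit_secs = minutes * 60
        self.left_charter_at = None

    def wander(self):
        # Note the moment you step away from the main mission.
        self.left_charter_at = time.monotonic()

    def should_snap_back(self) -> bool:
        # True once the detour has used up its allowance.
        if self.left_charter_at is None:
            return False
        return time.monotonic() - self.left_charter_at >= self.limit_secs

    def snap_back(self):
        # Return to the charter and reset the leash.
        self.left_charter_at = None

# Narrow charter? Lengthen the leash. Wide charter? Shorten it.
leash = Leash(minutes=15)
leash.wander()           # something at the edge of the charter catches your eye
if leash.should_snap_back():
    leash.snap_back()    # back to the mission at hand
```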

This variable leash approach enables progress while also refining the focus of your charters on an iterative basis. As we explore and learn, more effective ways to decompose the system under test will present themselves. The testing story emerges as you move through the system; the challenge is to find the right balance of focus, to ensure that we are not hoodwinked by 'The Fallacy of the Single Point.'

Monday, 26 May 2014

Reviewed - The Tester's Pocketbook by Paul Gerrard


I had heard a great deal about this little book. Some who had read it appreciated its premise; some were in fairly fundamental disagreement. If a text generates such polar opposites of opinion, my interest is immediately piqued! So let's begin with that premise:
"A Test Axiom is something we believe to be self evident and we cannot imagine an exception to it"

I feel this is indeed a risky premise for an approach to testing, as it could easily be misinterpreted as a set of iron laws which, if followed, will magically output an appropriate approach to testing. With this in mind I set about the enjoyable challenge of dissecting these axioms. Let's take, for example:
"Testing requires a known, controlled environment"

There are absolutely benefits to this statement, but also damaging flipsides for your test approach. A known, controlled environment is limited in variance, and therefore only able to expose bugs of a certain nature. In addition, tests run in such an environment can give false signals, as the variance and scale of the real world change outcomes.

On the reverse of this, I found a number of 'axioms' more challenging:
"Testing needs stakeholders"

I can imagine a great deal of variance here in terms of who the stakeholder is, and their agenda and beliefs, but testing without an audience? Can I imagine a context where this is not axiomatic? Stakeholders may see testing as a 'hoop to jump through' rather than a skilful way of providing them with the information they need, and feel they don't need testing, but testing needs stakeholders to provide context for scope and priority decisions.

The 'Axioms' form part of the 'First Equation of Testing':
"Axioms + Context + Values + Thinking = Approach" 

I found this to be another challenging assertion, as the application of axioms in an equation could be interpreted as a formula for success, whereas the real challenge of testing lies in the spaces between the constituent parts of the formula and how they interact. I see danger in creating formulas and checklists for testing, as it perpetuates the linear, tickbox impression of testing as a craft. In fairness to the author, the overall tone of the piece encourages the application of the axioms and formula as part of a wider toolkit.

Although I found myself disagreeing with (what I saw as) the overall premise of the text, the author strongly advocates a focus on stakeholders and, by extension, on providing the right information at the right time to those who make decisions. These sections are well worth reading and paying attention to; I have certainly applied some of those ideas to my recent work, and they provided an excellent framework for thought and approach. The author builds from a fairly formal approach to testing to give due attention to the spectrum of formality and the value of a varied approach. Initially I felt the text suffered from a lack of acknowledgement of the complexity of contemporary systems, but this acknowledgement grew as the text progressed, which helped to provide a more rounded view of testing.

I found the author's real-world experience shone through towards the end of the text, where the reality of delivery is evident, although I think he leans too far towards testing being concerned with the cost of failure rather than the benefit of the system meeting its mission. Both are important, but I prefer the positive side of this coin, and I believe testing enjoys a better profile when approached from that standpoint.

A thoroughly recommended read for testers of all experience and seniority. I will use some of the 'axioms' as part of my testing toolkit, although with an eye on their fallibility. I'll end with my favourite 'axiom', which is certainly one I believe in:
"Testing never finishes. It stops."

Thursday, 8 May 2014

Let's celebrate! Anyone still out there.....?



Pyrrhic victory. I was reminded of this term a few days ago. 

It is when winning decimates *almost* everything, so that winning is basically not worth the cost exacted to achieve it. I believe I have seen this effect on teams during and after very long development projects, the dreaded 'death march.' The project's aims might be valuable and completely worthwhile, but at what cost?

Sometimes, the stresses and strains of such endeavours decimate the team tasked with delivery. Relationships are strained or break, enthusiasm is replaced with cynicism, and previously open minds are closed to protect against harm and monotony. Previously conquered silos re-embed themselves.



Consider those precious 'T-shaped' people, who are consistently pushed to their limits and burn out, or retreat back into their shells. As a complement to the determined specialist, these guys (and encouraging more of them to flourish) are the key to unlocking effective delivery. Their flexibility and enthusiasm are their best qualities and their worst enemies in this context.

So before you embark on the 'next big thing' (with emphasis on the big), take the time to consider its impacts on the humans who deliver it and split it into manageable but valuable pieces. Or you might be left with a delivered project, but no one willing (or even around) to celebrate it. 

Tuesday, 6 May 2014

Reviewed - The Effective Executive by Peter Drucker


I'm always slightly sceptical of the phrase 'timeless' when it comes to management literature, given the infinite variance of people and the situations we find ourselves in. The Effective Executive was described as exactly that by the excellent Manager Tools podcast, and I soon found myself in front of a well-known online store ordering a copy.

The sparseness and matter-of-fact nature of Drucker's language struck me immediately, although that sparseness suits the practical nature of the guidance given, starting with managing one's time.

The reality of time is that it is the one thing (on an individual level at least) that you cannot gain more of. Drucker's message is quite bleak at first, but I will not contest the reality of it; most executives I know will admit to rarely being able to focus on the critical issues, as they are drawn in varied directions to tend to the issues of today when they may be better served focusing on tomorrow. Indeed, that is their primary function. Tracking time at a micro level, I find, is not natural to most. I am vaguely aware of where my time goes at a macro level, although I can imagine areas of ineffectiveness lurk which could be righted. Drucker's advice here is well founded, although I believe ideas of slack and long-leash learning would be a welcome addition to his time model, even for executives.

It is in the focus on contribution that Drucker's text begins to come alive. Whereas I see most executives focusing on the mechanical process of delivery and management with the goal of efficiency in mind, Drucker posits that this is sub-optimal. Instead, key concepts and principles should be the domain of the executive, aided by analysis of domain and problem with results in mind; in particular, the question of whether or not an event or problem is a paradigm shift for the organisation, focusing on root causes rather than symptoms.

Another idea which spoke strongly to me is that an executive should seek to utilise a person's strengths, rather than focus on their weaknesses. If a person hired in a management capacity has a natural aptitude for sales, then use them in that capacity rather than bemoaning their operational shortfalls. As a person with a predominantly practical aspect to their personality, this appeals to me, as opposed to the long, drawn-out process of maintaining the status quo.

Reality (or at least the reality painted by Drucker, to which I subscribe) is prevalent within the text. Nowhere more so than in its description of enduring leadership, as opposed to flash-of-genius leadership. Effective leadership is grounded in determination, as few of us possess the brilliance required to effect significant change instantly. Some may see this as another bleak message in a world where we are told anyone can do anything. It is not delivered as such, only as the austere thought that if genius were needed everywhere, progress would be slow indeed! Encourage effectiveness so that ordinary people can produce extraordinary results: that was the message I took away.

Effective decision making is covered in some depth, with a great many useful techniques to take note of and use. The area that struck me most was disagreement. In most organisations, everyone needs to be 'on board' or 'on the same page.' Yet disagreement is needed to be effective; otherwise we are in danger of making decisions of shallow agreement which do not stand up to serious scrutiny. I have noted that many executive relationships I observe appear brittle and do not welcome constructive challenges (notwithstanding the non-constructive challenges, of course). Drucker's argument here resonates in the software development world, where challenge is seen as blockage and being 'the guy who asks awkward questions' is a lonely, lonely place.

All of Drucker's arguments are based on the principle that self-development is the path to effectiveness. Some lessons are learnt easily, others the hard way, but I agree that effectiveness comes more from within than without. I feel that (as with Weinberg's Secrets of Consulting) I will learn more from this book with experience, as my own self-development progresses. Let's see how I feel about it in a few years......

Saturday, 29 March 2014

The bigger the rock, the smaller the pieces need to be



You know what I really, really value in a fellow professional in the information technology delivery world? That special, magical ability to decompose a large (and potentially complex) problem into small, simple subtasks.

A child can do this, right? This is 'Being a Human Being 101.' So why is it a behaviour that eludes a large percentage of those in the information technology industry? It is a trait of the people I like to call 'people who get things done.' Not through heroism or great feats against monolithic bureaucracies, but through the simple application of critical thought.

Is there a problem here? 

People like the idea of building big stuff, stuff to "get hold of"; it's very grand to say we're building an "enterprise level" application. In that vein, I hear "well, this is a step change to the product" or "there is no value in splitting up the project into smaller deliverables" on a regular basis. The justifications of the desperate, determined to protect bloated road maps which perpetuate their own existence.

At its root, the real problem with big stuff is that it is counter to how our brains actually work. We are overwhelmed by it; we cannot hold it within our puny cerebrums. Small stuff is natural: we can encircle it with thought and apply ourselves to it. We can be happy that it's done, or at least that it's time to stop.

If you are going to be marching for a year, you need plenty of opportunities to stop off on the way. Save it all up for one payload and you are likely to trudge forwards with your eyes to the floor for a large part of the journey. Your destination may well be on the other side of the horizon before you realise. 

So why do I see this all around me? 

Aside from my own bias, it's actually a thing which takes thought and effort. It's easier *right now* just to plough on and not consider how an entity can be decomposed. At least that shows progress, right?

Wrong. This stems from the perception that skilful decomposition 'slows down' a delivery initially, while a slice of functionality is built. It speeds up your ability to generate feedback though. Which means you are more likely to deliver the right thing. Which, from experience, means you build what's needed, rather than spending time on what isn't.

Can someone be explicitly taught this ability?

I believe so, although it's rarely that simple. At its heart are the ability to recognise flows, the willingness to change the angle of approach when required, and the application of systems thinking. Decomposing complex systems or problems into simple rules of thumb is critical to iterative delivery.

I always like the thought of splitting an entity by the questions you wish to answer about it. Or consider the simplest thing you can do to overcome a constraint, expose information about risk or deliver customer value. I always imagine the entity as a sphere whose surface I can approach from anywhere; eventually, I'll see the angle of approach. Hey, it's the way my mind works. I have to apply the mental brakes and think, rather than plough on. It's taken some practice and discipline on my part.
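To illustrate splitting by questions (with an invented feature and invented questions, not a real backlog), the sketch might look like this:

```python
# Each question is small enough to hold in your head, and answering
# it yields a deliverable slice that generates feedback on its own.
feature = "Import map pins from customer files"

slices = {
    "Can we import a single well-formed CSV row?": "walking skeleton",
    "What happens with a malformed row?": "error-handling slice",
    "Does a 100,000-row file import in acceptable time?": "performance slice",
    "Can one client's XML map onto the same model?": "format slice",
}

for question, name in slices.items():
    print(f"{name}: {question}")
```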

This ability enables that most precious of habits: the delivery of value. For now, the delivery of unvalue is pervasive to my eyes, but I'll strive to ensure that this special but underrated ability continues to have a place in the world.