Sunday, 22 May 2016

Tester in Development

After I shared this on Twitter a little while ago:

I got to thinking about the 'Developer in Test' pattern which has gained in popularity over the last 15 years of my testing career (but probably existed way before that), so here is my expansion on the point. 

Who in their organisation has one (or many) of these?
  • Technical Tester
  • Developer in Test
  • Software Engineers in Test
  • Software Development Engineers in Test
  • Software Engineers in Test Levels One, Two and Three

* Note 1 - As a sub pattern, it seems that the job title for this pattern has got longer over time, although I am being a little naughty with the data. However, would it be the first time a (sometimes) badly defined job got a lengthy title to give it some credence?
** Note 2 - I have never heard of a Programmer in Test. Let me know if anyone has seen/lived this. It may have the benefit of at least describing the primary activity of the job, although maybe testing should be that primary activity.
*** Note 3 - I would very much like to be known as a 'Tester in Development' from now on. Which would be a ludicrous title right? Suggesting that I am some kind of foreign body, rather than part of how good stuff is built? Although, read in a learning context, we are all testers in development.

I get the feeling there will be a few nods out there. 

I guess we could argue about a few of these titles. However, for me, testing is a technical profession: technical skills are acquired via learning and practice, specific to that profession or the task at hand. Hell, most people who have a profession have, theoretically at least, acquired specific technical skills, rendering them technical; with what proficiency is a different debate. I don't see testing as an exception. But, you know, testers need to be more technical...

I digress.

Patterns within Patterns 

If you do have one of the specific titles above in your organisation, there may be one or more of the following patterns swirling around in that larger pattern:

Not hiring developers who value testing...

Wrestling with your developers about delivering testable chunks of functionality? Thrown over the wall at the last minute? No unit testing? 500 errors when you load the new site (I'll let you decide if I mean http errors)? Unfortunately, developers who do not value testing are still very much part of the software development world. And if you don't determine their propensity to value testing at hiring time, your life will get very interesting in the next few months. So let's just hire a Developer in Test to bridge the expanding gap, rather than those who view testing as a key part of building things.

Developers don't have time to create tools for testing...

Suppose you get the first bit right and you hire a bunch of developers that value testing. You, as a tester, identify a system which you need to be able to control and observe closely to discover information about risk, maybe even save a bit of time down the line with a little early discovery. But no one has time for that. Sounds like a great idea, but we need to be the first to bust open the Iron Triangle of cost, time and scope to deliver this thing. Let's get a Software Engineer in Test to do it. This might get the job done, but it also might point to a deeper sustainable pace problem.
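To make the "control and observe" idea concrete, here is a minimal sketch of the kind of observability helper a tester might ask for: something that turns raw health-check samples into information about risk. The field names and the 500ms "slow" threshold are illustrative assumptions, not a real tool or API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    """One observation of the system under test (illustrative shape)."""
    status_code: int
    latency_ms: float


def summarise(samples: List[Sample], slow_ms: float = 500.0) -> dict:
    """Turn raw health-check samples into information a tester can act on."""
    # Server-side failures are the headline risk signal.
    errors = [s for s in samples if s.status_code >= 500]
    # Slowness is a quieter signal, but often points at the same problems.
    slow = [s for s in samples if s.latency_ms > slow_ms]
    return {
        "total": len(samples),
        "error_rate": len(errors) / len(samples) if samples else 0.0,
        "slow_count": len(slow),
    }
```

The point is not the code, which any of the developers could write better; it is that a tool this small already gives the tester control (what to sample, what counts as slow) and observation (a summary worth talking about) without waiting for a dedicated hire.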

Testers playing to weaker skills over strengths...

Programmers are, for the most part, really good at programming, and they practice lots. Testers, while a great many can program, are probably less effective at it, as they practice less. I'm not saying testers shouldn't program, but the power of a joint approach to, say, building a check automation framework or a tool to observe the health of a system creates an opportunity to play to strengths on all sides. I can program, but the developers I work with are better at it than me. I encourage them to do it, and work with them to make meaningful checks and information diviners. Or we can compartmentalise that work into a Technical Tester, reduce the level of engagement and probably subtly miss the point from both a programming and a testing point of view.

Tool aware but technologically unaware testers...

A developer once said to me:
"Ash, now the testers have got hold of {insert popular service layer testing tool here}, it's like having a rail gun full of XML indiscriminately firing the wrong thing at the wrong endpoints."
Well, I'm all for a bit of randomness in testing, and it certainly tipped out a boatload of error handling goodies. However, it points to a wider phenomenon: testers becoming tool aware but lacking technological awareness. As in, I have a means of invoking this RESTful service but I don't understand the structure of it, its strengths and weaknesses, where I need to probe. So, this type of problem might be farmed out to a Software Developer in Test Level 3. At the expense of the learning required by others, which just might help them on the journey of becoming technologically aware.
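The difference between the rail gun and technological awareness can be sketched in a few lines. Instead of firing arbitrary payloads, a tester who understands the resource's structure can derive targeted probes from a known-good payload, each one corrupting exactly one field. The payload and probe strategy below are illustrative assumptions, not any particular tool's behaviour.

```python
import json
from typing import Dict, List, Tuple


def targeted_probes(valid_payload: Dict) -> List[Tuple[str, str]]:
    """Return (description, JSON body) pairs, each corrupting one known field."""
    probes: List[Tuple[str, str]] = []
    for field in valid_payload:
        # Probe: omit a field the service presumably requires,
        # to see how missing-field handling behaves.
        missing = {k: v for k, v in valid_payload.items() if k != field}
        probes.append((f"missing '{field}'", json.dumps(missing)))
        # Probe: null out the field, to exercise type/presence validation.
        nulled = dict(valid_payload, **{field: None})
        probes.append((f"null '{field}'", json.dumps(nulled)))
    return probes
```

Each probe comes with a description of what it is testing, so when an endpoint returns a 500 the tester can say why, rather than just that it happened.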

Local Optimisations

Technical Tester, SDiT, SDEiT Level X. Call it what you want, it's still one of my favourite local optimisations in testing. The system may need that role filled, but we fill it by giving someone a job title. Based on the patterns above, we may be missing the bigger picture.

Wednesday, 20 April 2016

By the community, for the community

I was going to write a blog about the various activities we participated in on a fantastic day at the Leeds testing community's second Atelier. Trust me, they were great, but a comment I received the day after intrigued me more. An attendee said to me:

After the conference yesterday, I realised that "what are testers?" is a much more interesting question than "what is testing?"

I dug a little, asked for the deeper reasoning:

Well, it's events like yesterday's where you see that testers are a far more diverse bunch of people than any other field of IT I can think of.

I was knocked back; I believe this was what we wanted to reflect with this event all along.

Consider the venue: we chose one which reflected our values. Wharf Chambers, a workers' co-op which is run with fairness and equality at its heart. Every comment received was that the relaxed atmosphere enhanced the event; people happily participated and asked questions. For me, much better than a hotel, auditorium or meeting room.

Then look at the sponsors, Ministry of Testing, I need say no more. The Test People. I was forged there to be honest, it will be with me forever. Callcredit. I was inspired there. Then, a chap called Tony Holroyd withdrew some money from his personal development fund and gave it to us. To me, that is amazing.

Our contributors were of all ages, genders, nationalities, experience levels, backgrounds, disciplines, testers turned entrepreneurs, agile coaches, graduates who had been through the 'testing academy', an increasingly common phenomenon.

I can't thank everyone who was part of the day enough, it's a long list, you know who you are.

Now to rest, then reflect and look to the next Atelier... 

Wednesday, 9 March 2016

Scale Panic

Cast your mind back. 

The last time you were exposed to a new system how did you react? Was it a time of hope and excitement? Or of anxiousness and nagging dread? If it was the latter, you may have been suffering from something I like to call 'scale panic.'

First up, let me define what I mean by this in a testing context:
'When a tester first encounters a new system they need to understand and evaluate, the cognitive load from understanding the parts, interactions and boundaries is too intense. The tester often enters an agitated and slightly disgruntled state of paralysis for some time after scale panic has taken hold.'
The subsequent testing approach and its effectiveness can suffer, as without an appreciation of the big picture, crucial context is often left undiscovered. So, why does this phenomenon occur, I hear you ask. Well, every situation is different of course, but I believe it is centred around a key trait for a tester: awareness. To clarify, let's decompose:

  • People Awareness
    • Testers have many sources of information. Documentation, systems, financial reports, marketing information, the list is long. However, people generally generated this stuff and are worth getting to know. If your oracle gathering strategy doesn't include people, then you are most likely missing out on a crucial aspect of the system puzzle. If the majority of oracles were generated by people (or indeed are people), your questioning and communication skills are even more important than you probably think.
  • Technical Awareness 
    • I really don't want to disappear down the testers being technical rabbit hole. That argument is grey and for me, lacks clarity, and would need a whole other blog/book/mini series. Not what I'm after here. As a tester, do you understand the strengths, weaknesses and quirks of a given technology? What is a relational database good at? What is a document database terrible at? Without these relative basics, the cognitive load of a new system can be too much to bear. Learn the basics of technology and a new system? Big rocks to crack...
  • Depth Awareness
    • Man, I have a detail problem. Do I need to understand every little bit of every little thing before evaluating something? Sometimes. Does that get me into trouble? Oh yeah, but thankfully less often than it used to. I have learned that detail is useful, but when it is useful is the real kicker. One of the key habits for a tester is to put yourself on a leash. Whenever you feel yourself diving deeper than the situation demands, or drifting further from the beaten track than the situation justifies, snap back to reality. Personally I use testing sessions with a timed leash, which I enable before going off road.
  • Temporal Awareness
    • Well, if testing is about shattering illusions, here's a biggy for pretty much everyone. Systems change over time. In fact, it's pretty much the reason we are all in a job (and sometimes sadly out of a job), so when looking at the big picture of a system, accept that time will pass and it will change. Your understanding is an oracle, which is fallible and subject to entropy, as information erodes over time. This is natural; the key is to accept this cascading fallibility. When overcoming scale panic, accept you are taking a snapshot at a moment in time and question that accordingly as you test. 
  • Existential Awareness
    • When in the grip of scale panic, I see testers making small parts of systems move, which may give the illusion of progress in understanding but feeds an obsession. The obsession of how it works over what it is made up of and why it exists. I assert that how something works gives you a small insight into the scale of a system, but what it is made up of and why it exists give you a wider picture. Understanding the what and the why assists a tester to determine if the problem is solved at the same scale as the system and aligned with its purpose, rather than just what I see works in some way in an unknown context.
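The "timed leash" mentioned under Depth Awareness can be sketched as a tiny helper: start a session with a time budget, and check between test steps whether the budget is spent. This is a minimal illustration of the habit, not a real tool; the budget and the injectable clock are assumptions for the sketch.

```python
import time
from contextlib import contextmanager


@contextmanager
def leash(minutes: float, clock=time.monotonic):
    """Yield a callable that reports whether the session's time budget is spent.

    `clock` is injectable so the leash can be exercised without waiting
    (and so you could swap in another time source if you wanted).
    """
    start = clock()
    # The caller polls this between test steps; True means "snap back".
    yield lambda: (clock() - start) >= minutes * 60.0
```

Usage would look like `with leash(15) as off_budget:` around a deep dive, calling `off_budget()` between steps and surfacing when it returns True.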

I've been thinking about this a lot. Mainly because as (mostly) a career consultant, I encounter new systems all the time. I have built a personal model, expressed above. Next time you encounter a new system, think awareness. Think about the people, technology, depth, time and existential aspects of the system. Then map that, a useful output for everyone, and a challenge to existing assumptions. 

Use it as a guide for your testing, always remembering that it's a model. It's wrong, but it's very useful.

Thursday, 21 January 2016

(Almost) Total Recall

Alas, this blog post is not about Arnold Schwarzenegger's classic film. And it's certainly not about Colin Farrell's ill-advised remake. Leave my childhood favourites alone now please. It's about how small devices to aid your memory can change your testing outlook. You won't be a secret agent (if Quaid/Hauser ever was) but happily you won't have to extract tracking devices from your nose, so you don't need to wear a wet towel on your head (if Quaid/Hauser ever did). Anyway, time to get to the chopper (I know)...

When you hear the crunch, you're there...

Test strategies, plans and policies are such loaded terms nowadays.
Some testers pine for the days of the weighty documented approach; some have little strategy and rely on their domain knowledge alone. For me, I prefer a third (or more) way, enhancing my toolbox for the context I find myself in.

Open your mind...

Mnemonics might be an approach to help with consistent, rigorous thinking about what will be tested, without over-specifying how this might be done or documenting in a time-punitive fashion. They may allow us to build maps of our testing interactively, in a way that may appeal to stakeholders more than (for example) tabular information...
I present the below as a very quick guide for one to two hour introductory sessions for clients who had a problem with weighty, or distinctly wafer-thin, thinking around testing.

See you at the party Richter...

Maps can be found here
First, some critical thought about whether mnemonics are such a good thing (or not) in a testing context:

Some examples across certain disciplines. To introduce the flexibility of the approach:

Get the group to create an example using their own context. Plus a few potential weaknesses that I have discovered in the approach when applied to the world at large...

Finally, get the group to think about creating their own mnemonic in their context. I have added a few starter aspects that teams might want to take into account when building a mnemonic:

Do you think this is the real Quaid? It is...

Then it's over to you guys. The real joy for me is crafting your own, but as far as training wheels go, there is loads of material out there. Start here and expand to fill the space: 

Sunday, 17 January 2016

State of Testing Survey 2016 - Get Involved!

Big Picture

Organisations and individuals often attempt to capture the state of the testing craft. 

Whether using a limited dataset (their organisation/group of organisations/their clients), or more anecdotally as an individual ('this is what I think the state of testing is, based on what I see and feel'). I, for one, would love to see a more holistic picture of what testers think about testing.


As testers we should be able to report the status of our testing at any moment; I think we should be able to do this on a wider scale too. The State of Testing Survey 2016 is an endeavour which will attempt to capture a snapshot of said state, and hopefully build up a dataset over the next few years which will show us how we as a craft evolve.

Get Involved!

Anyway, time for the superliminal message:


(If you don't, people will read the 'World Quality Report' and believe that instead. I don't want that. You don't want that. Trust me).

And who knows, maybe in the future we could get those of other disciplines to do something similar about how they view the state of testing. Now that would be an interesting read!

Tuesday, 27 October 2015

A single source of testing truth...

Truth. Oscar Wilde said it best I think:
'The truth is rarely pure and never simple.'

In terms of vehement debates at the recent MEWT gathering in Nottingham, probably the talk which generated the most feedback and opinion was Duncan Nisbet's ‘The Single Source of Truth is a Lie.’ To be honest I was relatively quiet during the debate, as it was straight after my talk and I also needed time to parse such things, hence this blog. 

A link to the slides can be found here:

What Duncan said in my head…

Those were the slides, this is how I understood the talk given by Duncan. First up, there was an admission by Duncan he was just putting this one out there for feedback, which is kind of the point of MEWT really. Second up, there was belt and braces, a definition of truth:
‘Conformity with reality or fact’ or can be otherwise known as ‘verity’
Thus began the front loading of the mind with terms with multiple, deeper meanings depending on their context. To be fair to Duncan, when he picks a subject, he doesn’t dance around on the edges. Truth. 

He used the ‘Three Amigos’ device as a way to introduce the topic, namely how a session such as that can generate a single source of truth, a shared understanding that can be taken away and built upon. This might be living documentation (a la Specification by Example), which is semantically similar to defining some acceptance tests to drive the development. However it manifests itself, is it possible to define truth for a given feature/function/context/situation?

I believe the gist of the talk and the debate was a loose consensus that truth is a multi-faceted beast. Duncan credited James Bach with the following (the three pillars below are littered throughout many disciplines (social work, medicine, mental health) as a model in their own right). The truth is made up of (as I understood):

  • Social truth – conformity with realities commonly held within/across team/social strata;
  • Psychological truth – conformity with realities held individually within one’s psyche;
  • Physical truth – conformity with the reality presented by a physical artifact, such as your product.

Models of truth on top of models for testing…

Duncan then overlaid this on top of the Heuristic Test Strategy Model:

  • Social Truth ↔ Project Characteristics – linked by a need for shared understanding about the feature/function/context/situation;
  • Psychological Truth ↔ Quality Characteristics – linked by how one might feel about the feature/function/context/situation;
  • Physical Truth ↔ Product Characteristics – linked by the production of artifacts pertaining to the feature/function/context/situation.

This resonated with me, I could mentally link this version of how the truth might be determined with a model of questions. Which, for myself, is the point, in that having a model which allows you to determine what you might consider to be truth in your context, is rather useful, even if the precision of that determination is fallible.

The debate that began afterwards ranged from definitions of premise and assumption, and disappeared down etymological rabbit holes. I think eventually the white flag was waved and we moved on.

Should testers even talk about truth…?

What do I think of this debate then? I will keep it short and simple, in the light of the potential maze this represents. As a point of order, and in concert with the model presented by Duncan, most of what follows discusses physical truth, namely an artifact/product and what it might do. Social and psychological truths deserve tomes of their own.
When the word ‘truth’ is used in a testing context, I generally think of a few things:

  • Words that we, as testers, shouldn’t use. I would probably put ‘truth’ into a similar bucket as ‘full’, ‘complete’ or ‘done.’ You utter these terms and the ground beneath your feet becomes decidedly shaky. Not one I would put in my safety language locker. Mainly because these terms are literally taken sometimes and truth (to some) seems so darned final.
  • We are in the information business, over the decision making business. “This is what it does” may be more of a tester’s domain over “This is what it should do.” After all, never the twain shall quite meet in beautiful clarity. That is not to say a blend of the two is not something to strive for (preferring early test involvement over at the end, ‘as a service’), but we should be mindful of our core principles.
  • Hang on. Have we not already got an approach for this, as in identifying oracles and being aware of their fallibility? Maybe we’ve already done this question. Nothing wrong with revisiting the Oracle Problem, but I believe that approach remains fundamentally sound, and leaves room for context, whereas truth chases the absolute (doffs cap to John Stevenson here, but I subscribe).

Is this (yet) another impossi-task…?

Speaking of absolute, is truth another technicolour-dreamcoat-wearing, rainbow-generating unicorn with diamonds for eyes that we seem to continuously chase in software development? It sounds suspiciously like that process of nailing down ‘stuff.’ Truth seems to me to be subject to change like all other things, and the more we try to pin the blancmange of truth to the wall, the slipperier the world gets. 

We (in software development, and those whose business depend on it) seem to rather enjoy setting ourselves impossible, contrary goals (deliver this huge thing that the world will still want in two years’ time for example) which directly grind the gears of the world. Maybe this is just one of those. We’ll get over it one day.

Certainly made me think. Truth might just be a journey, and not a destination.

Sunday, 18 October 2015

My First MEWT...

Thanks to Wikimedia Commons for the image
A few months ago I got a very intriguing invite from a certain Richard Bradshaw to contribute to MEWT, an event I had been aware of out of the corner of my eye for a couple of years. The event is held at the beautiful Attenborough Nature Reserve and we delivered our reports within the Media Centre, perched in the middle of the lake. A stunning venue, and a great setting for learning.

As well as being my first MEWT, it was also my first peer conference, where experience reports are presented and then the floor is opened to questions, clarifications and comments. After the floor was opened to determine the running order, we took a vote. I'm not going to lie, I had a hangover, after discussing what the time "half eleven" means to a person from The Netherlands into the relatively small hours the night before. This naturally meant I would be first up. Of course it did.

So I began to talk through my model for surfacing unrecognised internal models, inspired by a number of coachees who spoke subconsciously of their models, struggled to articulate them, and applied them unwittingly. To be honest, this was quite a nervous time. This model had not seen the light of day outside my brain and those of a few of my coachees. It is very personal, like anything one has created, and to expose it to scrutiny can be painful. It was not, however. Instead I received thoughtful feedback on potential improvements, and some of my less convincing answers prompted me to re-examine my own thinking on some aspects of the model. Areas of feedback which really interested me:
  • Being careful with goals - goals can drive behaviours, perhaps not in the way you intend.
  • Having a step to revisit goals on an iterative basis is valuable, as the world changes around the coach and coachee.
  • Sharing between coachees - all my coachees are on this path, so why not encourage them to share with each other, giving shared learning opportunities and empathy with the journey of others.
  • To visualise the model in some way, as opposed to the mindmap I had. Coaching ebbs and flows, so I think a means of communicating the model in this manner would be valuable.
MEWT has added the following to my blog post list:
  • Testers talking about truth - inspired by Dunc Nisbet, although I will need to take a week off to investigate, parse and articulate this one!
  • Testers improving themselves/awakening to a more intentional, thinking approach - inspired by Ard Kramer and Geir Gulbrandsen - at what point do we wake up and no longer apply rote models of testing to all problems? I know when I did, I hope to explore this further.
I could do them all as all the ideas presented certainly made me think. Maybe someday, but I'll start with those two. All that remains is to thank everyone. Those who spoke, questioned, organised, facilitated, tweeted, discussed and all the other activities that made MEWT 2015 a massive success.

Also see: - who were hugely gracious in their sponsorship of the venue for the event.

Thanks to John Stevenson for this great photo. Tutus have been removed to protect the innocent.