Thursday, 21 January 2016

(Almost) Total Recall


Alas, this blog post is not about Arnold Schwarzenegger's classic film. And it's certainly not about Colin Farrell's ill-advised remake. Leave my childhood favourites alone now please. It's about how small devices to aid your memory can change your testing outlook. You won't be a secret agent (if Quaid/Hauser ever was) but happily you won't have to extract tracking devices from your nose so you don't need to wear a wet towel on your head (if Quaid/Hauser ever did). Anyway, time to get to the chopper (I know)...

When you hear the crunch, you're there...

Test strategies, plans and policies are such loaded terms nowadays.
Some testers pine for the days of the weighty documented approach; some have little strategy and rely on their domain knowledge alone. For me, I prefer a third (or more) way: enhancing my toolbox for the context I find myself in.

Open your mind...

Mnemonics might be an approach to help with consistent, rigorous thinking about what will be tested, without over-specifying how this might be done or documenting in a time-punitive fashion. They may allow us to build maps of our testing interactively, in a way that may appeal to stakeholders more than (for example) tabular information...
I present the below as a very quick guide for one- to two-hour introductory sessions for clients who have a problem with weighty, or distinctly wafer-thin, thinking around testing.

See you at the party Richter...

Maps can be found here
First, some critical thought about whether mnemonics are such a good thing (or not) in a testing context:

Some examples across certain disciplines. To introduce the flexibility of the approach:


Get the group to create an example using their own context. Plus a few potential weaknesses that I have discovered in the approach when applied to the world at large...


Finally, get the group to think about creating their own mnemonic in their context. I have added a few starter aspects that teams might want to take into account when building a mnemonic:


Do you think this is the real Quaid? It is...

Then it's over to you guys. The real joy for me is crafting your own, but as far as training wheels go, there is loads of material out there. Start here and expand to fill the space:
http://www.qualityperspectives.ca/resources_mnemonics.html 

Sunday, 17 January 2016

State of Testing Survey 2016 - Get Involved!


Big Picture

Organisations and individuals often attempt to capture the state of the testing craft, whether using a limited dataset (their organisation, group of organisations, or their clients), or more anecdotally as an individual ('this is what I think the state of testing is, based on what I see and feel'). I, for one, would love to see a more holistic picture of what testers think about testing.

Snapshot

As testers we should be able to report the status of our testing at any moment; I think we should be able to do this on a wider scale too. The State of Testing Survey 2016 is an endeavour which will attempt to capture a snapshot of said state, and hopefully build up a dataset over the next few years which will show us how we as a craft evolve.

Get Involved!

Anyway, time for the superliminal message:

STOP WHAT YOU ARE DOING AND COMPLETE THIS SURVEY RIGHT NOW!

(If you don't, people will read the 'World Quality Report' and believe that instead. I don't want that. You don't want that. Trust me).

And who knows, maybe in the future we could get those of other disciplines to do something similar about how they view the state of testing. Now that would be an interesting read!

Tuesday, 27 October 2015

A single source of testing truth...


Truth. Oscar Wilde said it best I think:
'The truth is rarely pure and never simple.'
MEWT…

In terms of vehement debates at the recent MEWT gathering in Nottingham, the talk which generated the most feedback and opinion was probably Duncan Nisbet's ‘The Single Source of Truth is a Lie.’ To be honest I was relatively quiet during the debate, as it was straight after my talk and I also needed time to parse such things; hence this blog.

A link to the slides can be found here:

https://mewtblog.files.wordpress.com/2015/10/ssot-is-a-lie.pdf

What Duncan said in my head…

Those were the slides; this is how I understood the talk given by Duncan. First up, there was an admission by Duncan that he was just putting this one out there for feedback, which is kind of the point of MEWT really. Second up, belt and braces: a definition of truth:
‘Conformity with reality or fact’, otherwise known as ‘verity’
Thus began the front-loading of the mind with terms with multiple, deeper meanings depending on their context. To be fair to Duncan, when he picks a subject, he doesn’t dance around on the edges. Truth.

He used the ‘Three Amigos’ device as a way to introduce the topic, namely how a session such as that can generate a single source of truth, a shared understanding that can be taken and built upon. This might be living documentation (a la Specification by Example), which is semantically similar to defining some acceptance tests to drive the development. However it manifests itself, is it possible to define truth for a given feature/function/context/situation?

I believe the gist of the talk and the debate was a loose consensus that truth is a multi-faceted beast. Duncan credited James Bach with the following (the three pillars below appear throughout many disciplines, such as social work, medicine and mental health, as a model in their own right). The truth is made up of (as I understood it):

  • Social truth – conformity with realities commonly held within/across team/social strata;
  • Psychological truth – conformity with realities held individually within one’s psyche;
  • Physical truth – conformity with the reality presented by a physical artifact, such as your product.



Models of truth on top of models for testing…

Duncan then overlaid this on top of the Heuristic Test Strategy Model:

  • Social Truth ↔ Project Characteristics – linked by a need for shared understanding about the feature/function/context/situation;
  • Psychological Truth ↔ Quality Characteristics – linked by how one might feel about the feature/function/context/situation;
  • Physical Truth ↔ Product Characteristics – linked by the production of artifacts pertaining to the feature/function/context/situation.

This resonated with me: I could mentally link this version of how the truth might be determined with a model of questions. Which, for myself, is the point, in that having a model which allows you to determine what you might consider to be truth in your context is rather useful, even if the precision of that determination is fallible.

The debate that followed ranged from definitions of premise and assumption, and disappeared down etymological rabbit holes. I think eventually the white flag was waved and we moved on.

Should testers even talk about truth…?

What do I think of this debate then? I will keep it short and simple, in the light of the potential maze this represents. As a point of order, and in concert with the model presented by Duncan, most of what follows discusses physical truth, namely an artifact/product and what it might do. Social and psychological truths deserve tomes of their own.
When the word ‘truth’ is used in a testing context, I generally think of a few things:


  • Words that we, as testers, shouldn’t use. I would probably put ‘truth’ into a similar bucket as ‘full’, ‘complete’ or ‘done.’ You utter these terms and the ground beneath your feet becomes decidedly shaky. Not one I would put in my safety language locker. Mainly because these terms are sometimes taken literally, and truth (to some) seems so darned final.
  • We are in the information business, over the decision-making business. “This is what it does” may be more of a tester’s domain than “This is what it should do.” After all, never the twain shall quite meet in beautiful clarity. That is not to say a blend of the two is not something to strive for (preferring early test involvement over testing at the end, ‘as a service’), but we should be mindful of our core principles.
  • Hang on. Have we not already got an approach for this, as in identifying oracles and being aware of their fallibility? Maybe we’ve already done this question. Nothing wrong with revisiting the Oracle Problem, but I believe that approach remains fundamentally sound, and leaves room for context, whereas truth chases the absolute (doffs cap to John Stevenson here, but I subscribe).




Is this (yet) another impossi-task…?

Speaking of absolutes, is truth another technicolour-dreamcoat-wearing, rainbow-generating unicorn with diamonds for eyes that we seem to continuously chase in software development? It sounds suspiciously like that process of nailing down ‘stuff.’ Truth seems to me to be subject to change like all other things, and the more we try to pin the blancmange of truth to the wall, the slipperier the world gets.

We (in software development, and those whose business depends on it) seem to rather enjoy setting ourselves impossible, contrary goals (deliver this huge thing that the world will still want in two years’ time, for example) which directly grind the gears of the world. Maybe this is just one of those. We’ll get over it one day.

Certainly made me think. Truth might just be a journey, and not a destination.

Sunday, 18 October 2015

My First MEWT...

Thanks to Wikimedia Commons for the image
A few months ago I got a very intriguing invite from a certain Richard Bradshaw to contribute to MEWT, an event I had been aware of out of the corner of my eye for a couple of years. The event is held at the beautiful Attenborough Nature Reserve and we delivered our reports within the Media Centre, perched in the middle of the lake. A stunning venue, and a great setting for learning.

As well as being my first MEWT, it was also my first peer conference, where experience reports are presented and then the floor is opened to questions, clarifications and comments. After the floor was opened to determine the running order, we took a vote. I'm not going to lie, I had a hangover, after discussing what the time "half eleven" means to a person from The Netherlands into the relatively small hours the night before. This naturally meant I would be first up. Of course it did.

So I began to talk through my model for surfacing unrecognised internal models, inspired by a number of coachees who spoke subconsciously of their models, struggled to articulate them, and applied them unwittingly. To be honest, this was quite a nervous time. This model had not seen the light of day outside my brain and that of a few of my coachees. It is very personal, like anything one has created, and exposing it to scrutiny can be painful. It was not, however. Instead I received thoughtful feedback on potential improvements, and some of my less convincing answers prompted me to re-examine my own thinking on some aspects of the model. Areas of feedback which really interested me:
  • Being careful with goals - goals can drive behaviours, perhaps not in the way you intend.
  • Having a step to revisit goals on an iterative basis is valuable, as the world changes around the coach and coachee.
  • Sharing between coachees - all my coachees are on this path, so why not encourage them to share with each other, giving shared learning opportunities and empathy with the journey of others.
  • To visualise the model in some way, as opposed to the mindmap I had. Coaching ebbs and flows, so I think a means of communicating the model in this manner would be valuable.
MEWT has added the following to my blog post list:
  • Testers talking about truth - inspired by Dunc Nisbet, although I will need to take a week off to investigate, parse and articulate this one!
  • Testers improving themselves/awakening to a more intentional, thinking approach - inspired by Ard Kramer and Geir Gulbrandsen - at what point do we wake up and no longer apply rote models of testing to all problems? I know when I did, I hope to explore this further.
I could do them all as all the ideas presented certainly made me think. Maybe someday, but I'll start with those two. All that remains is to thank everyone. Those who spoke, questioned, organised, facilitated, tweeted, discussed and all the other activities that made MEWT 2015 a massive success.

Also see:

http://www.attenboroughnaturecentre.co.uk/

https://mewtblog.wordpress.com/2015/10/12/mewt-4/

http://www.steveo1967.blogspot.com/2015/10/mewt4-post-1-sigh-its-that-pyramid.html

http://www.associationforsoftwaretesting.org/ - who were hugely gracious in their sponsorship of the venue for the event.


Thanks to John Stevenson for this great photo. Tutus have been removed to protect the innocent.

Saturday, 3 October 2015

Leeds Testing Community unConference

A few of you will know that we (well, Stephen Mounsey, Nick Judge, Fredrik Seiness, Phil Hargreaves et al did all the hard work, I just flounced in and presented a workshop) have recently given birth to the Leeds Testing Community unConference. All conferences start from an acorn, a twinkle in the eye, and this was no exception.  I didn’t want to let it pass without blogging on it, as I believe it to be the beginning of something big! There is a real thirst for this kind of event in Leeds, a thriving tech city with loads going on.

A quick whistle-stop tour of my highlights:
  • Uno – Laurence Wood presented on his agile heroes, including my close testing and monitoring pal Gwen Diagram. I will not say the ‘D’ word. Also, on one to one ratios of developers to product owners (pinch me), a very strong start from a great speaker.
  • Dos – My mobile testing workshop, entitled ‘Extending your testing senses.’ Despite me being the only person in the room currently testing in a mobile context, everyone really got stuck in to using the on-device Android Developer Tools: testing search functions on popular networks using CPU monitoring, layout tools and many more. I heard a lot of heartening ‘I didn’t even know my phone did that’ comments. Even better, the same tools were used in the afternoon workshop. Joy.
  • Tres – Stephen Mounsey and sketch noting was a great interactive session, I felt a sense of accomplishment and satisfaction as my barely legible doodles became a coherent map of the session. Or something like that. The key learning was turning previously dreaded meetings into something engaging and being present. With a tangible output at the end of it.
  • Cuatro – An honourable mention for my good friend Clem Pickering, whose presentation of the Palchinsky Principles really resonated with me, with strong threads of experimentation and viewing failure as learning. Slide of the day was one showing how Prince Charles and Ozzy Osbourne share a surprising number of characteristics, demonstrating just how much perspective impacts your assumptions while testing.

All that remains is a massive thank you to the organisers, hosts (Callcredit Information Group, who I can testify are an extremely engaging organisation for those in software development) and my fellow speakers. 

Isn’t it exciting to be in at the beginning of something awesome?

Saturday, 19 September 2015

Metastuff


The world is really complex. Layers upon layers of stuff exist in the systems we test. Old stuff on top of new stuff, augmented stuff, rewritten stuff, interconnected stuff, internal stuff talks to external stuff, data from here, there and everywhere. Stuff is done on the move, stuff is done from places we never imagined. We are stuffed with stuff, and stuff about stuff. Metastuff.

The Problem with Stuff…

When testing this stuff, it’s pretty easy to get quite full of metastuff quite quickly. Then you get that post-Christmas lunch drowsiness, where you probably should be playing a board game with your family but for some reason you can’t move and are staring at the EastEnders Christmas Special (Phil is leaving Sharon, Beppe is selling E20). Being stuffed with metastuff has left you dull-witted, unable to focus on what matters.

Have I seen this in my fellow testers? Yes. Have I experienced this when testing complex stuff with multi-layered architectures? Oh yes.


Coping…

There is a way to cope though. You can do it. You remember when the Emperor shoots lightning from his fingers in Star Wars? You need to be able to do that. In reverse, where your fingertips are your enhanced and extended senses, and focal points for targeted information, filtering out the stuff. You can cackle like a maniac while you’re doing it if you want.

We need to complement our existing senses. This can be achieved by creating (and then sharpening) our feedback environment while we test.

First Principles…

As is healthy in testing, as it is in life, let’s start with principles:
  • Find out what matters to your stakeholders. If you have one client who provides 90% of the revenue and 90% of what they do is one specific action, focus your feedback there. I am saying this for completeness, as you already know this, right?
  • Complex systems are best understood in layers, receiving feedback from each layer is critical to creating an effective feedback environment.
  • Do not fear asking for systems to return concise information pertinent to testing: timestamps, exception information. This includes the reduction of noise within logs and auditing tables, so only critical information is presented, not just everything for information's sake. Verbosity can be controlled; your early environments can be more verbose than your production instances.
  • Distinguish between synchronous and asynchronous parts of a system. Knowledge is power. If you have a system which does some batch operations synchronously while other asynchronous actions are occurring, you need to test that, but you also need a way to isolate the feedback from each.
  • Use tools and techniques to make the feedback environment interactive and colourful. We see patterns, it’s one of our strengths, but can be assisted. Make it move and make stuff that is out of the ordinary look out of the ordinary.
  • Knowing how to set your environment to a known state is critical, and not just test data either. Being able to actually set your system state to a known state is key to a feedback environment which is clean and as predictable as you can make it.
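The 'known state' and 'reduce the noise' principles above can be sketched in shell. Everything here (file names, the log format, the 'payment' focus) is hypothetical, purely to show the shape of a session-prep step:

```shell
#!/bin/sh
# Hypothetical session-prep sketch: start from a known state, then
# filter the system's output down to the session's focus.

LOG=./app.log                  # stand-in for a real application log
FOCUS=./session-feedback.log   # concise feedback for this session only

# 1. Known starting state: empty out feedback from previous sessions
: > "$FOCUS"

# 2. Seed a fake log so this sketch is self-contained and runnable
cat > "$LOG" <<'EOF'
2016-01-21 10:00:01 INFO  heartbeat ok
2016-01-21 10:00:02 ERROR payment gateway timeout
2016-01-21 10:00:03 DEBUG cache warmed
2016-01-21 10:00:04 ERROR payment retry failed
EOF

# 3. Keep only what matters to this session: payment errors, timestamps intact
grep "ERROR" "$LOG" | grep "payment" > "$FOCUS"

cat "$FOCUS"
```

Note the noise is filtered rather than deleted: the full log is still there if the session's focus shifts, which keeps the verbosity control reversible.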

Specification by Example…

Here comes an example setup for a system with a three-tier architecture, which I find is a tremendous way to show what I’m talking about. For example, this may be the setup for a testing session:

Establishing a baseline…a clean, (relatively) known system starting point


Setting up to listen while you test…a way to generate continuous feedback



Gathering feedback and generating coherent information…relevant and timely to the session


This shows the visual steps of digging deeper. The arrows denote the level of interrogation: for example, find an error in the PHP logs, then discover the Zend Event and the following Code Trace. This depth of information can be further refined by targeting log files with specific criteria, based on what you are focusing on, by piping log files into grep, for example:

Looking out for specific process messages using tail -f /var/log/messages | grep "sftp", or monitoring processes similarly with watch -d -n 10 'ps aux | grep httpd' (quoting the pipeline so it runs inside watch, rather than grep filtering watch's own output).
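The same piping idea extends to watching several layers of a stack at once; a sketch, with wholly illustrative file names and log lines. tail labels each file's output with its source, which maps nicely onto feedback-per-layer:

```shell
#!/bin/sh
# Illustrative: one terminal watching several layers at once.
# In a live session this would be something like:
#   tail -f web.log app.log db.log | grep --line-buffered "ERROR"
# Here we seed tiny logs and run a batch equivalent so it terminates.

cat > web.log <<'EOF'
10:00:01 INFO  GET /search 200
10:00:02 ERROR GET /search 500
EOF

cat > app.log <<'EOF'
10:00:02 ERROR SearchService: null result set
EOF

cat > db.log <<'EOF'
10:00:02 WARN  slow query (1.2s) on orders
EOF

# With multiple files, tail prints "==> name <==" headers, so feedback
# stays attributable to its layer while grep narrows the focus.
tail -n +1 web.log app.log db.log | grep -E "ERROR|==>"
```

For a live session, swap the batch tail -n +1 for tail -f and keep grep's --line-buffered flag, so matches appear as they happen rather than when the pipe buffer fills.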

The above is a good link into tooling, which is less the focus of this blog, but when used to complement your own senses and configured intentionally (as in, pointed at the focus of the testing session rather than swamping you with information) it is hugely useful. Personally I think that is a separate blog, where it would be useful to show how I leverage certain tools (such as Android Studio, for example) to enhance my testing senses.

Debugging innit…?

Onwards. This is distinct from debugging, in my view, as you are providing the where and what of a problem, over the how and why. There is no barrier to the how and why of course; it depends on your context. This does depend on a certain level of technical awareness (as in, knowing the strengths and weaknesses of the technology and how to use it to enhance the information I glean), but in a multi-skilled team this should not be beyond a collaborative effort. Further to that, I do believe that a feedback environment such as the above is a team artifact, not only a tester’s domain. A tester’s bug reports should complement the debugging flow in a given technology or team, so only a few steps are required to begin the diagnosis of the how/why.

Further to this, your feedback environment might look a lot like your monitoring setup which can be applied on a micro scale (for a tester to use for example) or a macro scale (for a system, cluster or federation to use), thus creating a link between common feedback mechanisms for that tricky transition from development to operations. I won’t say the buzzword.

Over to you…

Next time you begin a testing session, ask the question:

Are my stakeholders getting value from what information this system can provide?

Then go a step further and ask:

Is the information the system provides filtered to my focus and therefore meaningful in context?

This will not replace your (un)common sense, but it may help save you from being inundated with a growing tidal wave of stuff. And all the stuff about that stuff.

Wednesday, 9 September 2015

Under Pressure


As a tester, you might recognise this feeling.

The world wants your tests to “pass.” Developers, Product Owners, Project Managers, Executives, everyone is looking at you, waiting for you, wanting the tests to “pass.” Wanting this feature to be delivered, this final piece of the going live puzzle.

Has it “passed” then?

Whatever it is, it hasn’t, and it can’t; at the heart of the matter, that isn't how exploration works. There are perceived and non-perceived problems, inconsistencies, misunderstandings, and conflicting stakeholder perceptions. It’s not your judgement to give; however, this doesn’t mean that you don’t feel any pressure as a human being.

When you feel it, here are some fun thoughts that you might want to bear in mind:
  • It’s really not your pressure. One of the key lessons I have learnt is not to accept the pressure of others. I’m a real responsibility magnet (there is a whole other blog there). But I make no promises when it comes to testing, I just mine for information. Stop accepting others’ pressure and watch your testing life transform.
  • Shit ain’t all down to you. It’s not honest, you are special and important but… For example, very early in my career, I had the last blocking bug on a gigantic, multi-multi-multi million pound offshoring project. I was being crushed by ownership of this hideous carbuncle of a bug. Afterwards I realised, it wasn’t mine. The root of that glittering pearl of a problem was way, way back in the mists of the project. Somewhere I had no control of at all. In another organisation. In the past. I had zero influence, literally nothing to be done.
  • It's quite popular to say “I’m giving information about quality to someone who matters.” Don’t even say that. Just say “information to someone who matters.” We need more distance from quality, not less. Every time you talk about testing and quality, you are creating a false link. It’s not real, and it confuses those who are already confused about what testing is for. Stop it.
  • Lastly the Zen bit. Nobody wants tests to fail, probably. But this is the rub. They don’t fail. You just learn. It’s true. Once you change from failing to learning, an infinite dimension of satisfaction opens up in your testing. You will find serenity in this thought, even in the most turbulent organisation.

Just remember, as a tester: the moment you allow the words “the test has passed” to pass your lips, turn anything green, give a thumbs up or a cheeky grin, however you do it, remember the myth you are perpetuating. That the information gained from exploration is a tester’s responsibility alone.

It’s everyone’s responsibility. I hear that a lot. Let’s live it.