Friday, 14 October 2016

Leeds Free, Independent, Punk Testing Atelier

Tuesday the 20th of September 2016 marked the 3rd iteration of the Leeds Free, Independent, Non-Affiliated, and quite frankly pretty darned Punk Testing Atelier.

About 100 attendees, 6 speakers, 4 workshop facilitators, 8 panel guests, 2 panel facilitators and about 10 brave presentation karaoke volunteers. Lots of active involvement if you want it, observation for the passive, without judgement.

There are 7 co-organisers too. We differ in many ways: gender, nationality, background, with the common factor of being enormous dorks. At the end of the day I paid tribute to our ingenuity, resourcefulness and enthusiasm for creating such a day for all. Every word was meant.

Our beginnings are humble, and so remains our intent. We aspire to give voice to those interested in and (inevitably) affected by testing, primarily in the thriving technology hub of Leeds. Especially to those who wish to find their voice. They often have the most interesting stories to tell, and their first appearance has an extra spark. Often, that rawness is where a great deal of learning can be found.

The day requires you to give two of the most precious things you have: your time and your attention. In recompense, we are very, very light on your pocket. We do welcome sponsorship, for the purpose of enhancing the experience and keeping it free. All sponsors contribute a small amount of money, and all are equal regardless of means. Stops anyone getting hot and heavy.

Our hosts are Wharf Chambers, a co-operatively run venue regarded as a safe space for the many communities of Leeds. It suits our independent, inclusive mission and feels like home, with its variety of spaces and atmosphere. When I'm there, I really don't miss the sterility of the meeting room or conference hall.

I won't comment too much on the content and how meaningful it was for the attendees, as I can't speak for them. I hopped between sessions, nudged elbows, tapped shoulders and generally cajoled throughout the day. Events as an organiser blur by. But if I had to pick a few top moments (and yes, I'm biased), I would say:

  • Dave Turner gave an eloquent reminder that test automation is extremely valuable and progressive, but it can be a risky game, and that some testers can be bloody well dangerous when given tooling that 'anyone' can automate tests in. It needs to be a team effort, supported by considerations of technology, product and risk. Dave is one of the most forward-looking developers, managers, coaches and thinkers I know, so I believe it's feedback worth paying attention to.
  • Ritch Partridge and James Sheasby-Thomas on the importance of user experience, design and accessibility in our thinking. I loved the mix of empathy, tools and techniques introduced by these two talks; hopefully our attendees will have had their eyes opened a little more, and gained a few new questions to ask as a result.
  • Gwen Diagram, for always and forever showing that testers are first and foremost punks, here to tread all over the status quo, ignore accepted wisdom and be the best they can be. When she spoke, I saw nothing but smiles and rapt attention. Imagine what testing would be if we were all a little more Gwen Diagram?

I have nothing but admiration for all those involved. In the current inventory of my career, the Atelier is, by a distance, my biggest source of satisfaction.

Friday, 2 September 2016

What if information isn't enough?

One of my aims for this year has been to attend/talk at what I will class for the purposes of this blog as 'non-testing' events, primarily to speak about what on earth testing is and how we can lampoon the myths and legends around it. It gets some really interesting reactions from professionals within other disciplines.

And usually those reactions (much like this particular blog), leave me with more questions than answers!


After speaking at a recent event, I was asked an interesting question by an attendee. This guy was great: he reinvented himself every few years into a new part of technology, his current focus being machine learning. His previous life was 'Big Data', more on that later. Anyway, he said (something like):

'I enjoyed your talk but I think testing as an information provider doesn't go far enough. If they aren't actionable insights, then what's the point?'

This is why I like 'non-testing' events: someone challenging a tenet that has been left largely unchallenged in the testing circles I float around in. So, I dug a little deeper and asked what was behind that assertion:

'Well, what use is information without insight, the kind you can do something about? It's getting to the point where there is so much information, providing more doesn't cut it.'


On further, further investigation I found he was using the term 'actionable insight' in his previous context, within the realm of 'Big Data.' For example, gathering data via Google Analytics on session durations and customer journeys. Lots of information, but without insight it is probably of dubious usefulness, unless analysed along other axes such as time.
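To make that concrete, here is a minimal sketch in TypeScript of the difference. Everything in it is invented for illustration (the Session shape, the week bucketing, none of it is the real Google Analytics export): the raw records are information, but only once they are aggregated along a time axis does something closer to an insight appear, such as a week where average session duration falls off a cliff.

```typescript
// Raw, GA-style session records: lots of information, little insight.
// (The Session shape and week bucketing are invented for this sketch.)
interface Session {
  start: Date;             // when the session began
  durationSeconds: number;
}

// Crude year-week key, good enough for a sketch.
function weekKey(d: Date): string {
  const jan1 = new Date(d.getFullYear(), 0, 1);
  const week = Math.ceil(((d.getTime() - jan1.getTime()) / 86_400_000 + 1) / 7);
  return `${d.getFullYear()}-W${week}`;
}

// Aggregating along the time axis is where something insight-shaped starts:
// average session duration per week, so a trend (or a sudden drop) is visible.
function averageDurationByWeek(sessions: Session[]): Map<string, number> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const s of sessions) {
    const key = weekKey(s.start);
    const t = totals.get(key) ?? { sum: 0, count: 0 };
    t.sum += s.durationSeconds;
    t.count += 1;
    totals.set(key, t);
  }
  return new Map([...totals].map(([key, t]) => [key, t.sum / t.count]));
}
```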

There is an associated model for thinking on the subject of 'actionable insights', namely a pyramid. It is based on the Data Information Knowledge Wisdom Pyramid (7). We love our pyramids; other shaped models for thinking are available, apparently. There is the odd cone in the literature too.

I also enjoyed the heuristics of an actionable insight in the Forbes article (3):

If the story of your testing includes elements of the above, it would likely end up quite compelling. It strikes me that an actionable insight is a fundamentally context-driven entity: it takes into account the wider picture of the situation while being clear and specific. If my testing can gather insights which satisfy the above, I believe my stakeholders would be very satisfied indeed. Maybe you could argue that you are already producing insights of this calibre but you call it information. Good for you if you are.


What immediately set my testing sensibilities on edge, from the conversation and subsequent investigation, was the implication that testing would produce insights and, by extension, imply that actions should be taken (1), which takes us into a grey area. After all, what do we 'know' as testers? Only what we have observed, through our specific lenses and biases. The person who questioned me at my talk believed that was a position of 'comfort but not of usefulness.' More food for thought.

Moreover, those involved with testing are continuously asked:
'What would you do with this information you have found?' 
I've been asked this more times than I can remember. Maybe it is time we considered 'actionable insights'; if this question is going to persist, we stand a better chance of a coherent answer. Otherwise the information gleaned from testing might be just another information source drowning in an ever increasing pool of information, fed by a deepening well of data.

Moreoverover, it showed the real value of getting out into the wider development community: questions that make you question that which you have accepted for a long, long time.



Friday, 29 July 2016

A Personal Model of Testing and Checking

As part of the whole CDT vs Automators vs Team Valor vs Team Mystic battle, one of the main sources of angst appears (to me) to be the testing and checking debate.

The mere mention seems to trigger a Barking Man type reaction in some quarters. Now I enjoy someone barking like a dog as much as the next person, but when discussions around testing resemble the slightly grisly scenes in Cujo, we've gone too far. To me, the fallacy at play appears to be "you strongly advocate X, therefore you must detest Y." Stands to reason, right? I've got two cats I love very much, therefore I cannot stand dogs.

Anyway, I like the testing and checking model. Note the use of the word model. I really mean that: it helps me to think. It helps me to reason about how I am approaching a testing problem and provides a frame, in the form of a distinction. More specifically, a distinction which assists my balance.

I've added it to my mental arsenal. As all good testers should do in my eyes with a great many models. Not an absolute, but a guide.

It takes the form of a question I ask while analysing a testing problem, during testing, or when I'm washing up (sometimes literally) afterwards:

"Now, Ash, how much exploration will you/are you/have you do/doing/have done about the extent to which this here entity solves the problem at hand and how much checking against, say, data being in the place that might be right according to some oracle(s)"

Let's take an example. I'm doing a piece of analysis on a user story for an API written using Node.js, after having a good natter with all the humans involved:

I might have a mission of, say:

"To test that product data for home and garden products in the admin data store can be parsed when retrieved and could be consumed by React to be rendered on the mobile website..."

I might generate a couple of charters like:

"Explore the structure of a response from the product api
Using the React properties model oracle
To discover if the data is of the correct type to be consumed by React" 
"Explore the retrieval of specific home and garden products returned from the product api
Using a comparison of the contents of the admin data store as an oracle
To discover if the response data corresponds to the content of the origin"
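To make the distinction a bit more tangible, here is a rough sketch of what the first of those charters might reduce to when expressed as a check. Everything specific is invented for illustration: the endpoint, the Product shape and its fields stand in for whatever the real API and the React properties model oracle would actually dictate.

```typescript
// Expected shape of a product, standing in for the React properties model
// oracle. Endpoint and field names are invented for this sketch.
interface ExpectedProduct {
  id: number;
  name: string;
  price: number;
  category: string; // e.g. "home-and-garden"
}

// A light structural check: right fields, right types.
function looksLikeProduct(candidate: unknown): candidate is ExpectedProduct {
  const p = candidate as Record<string, unknown> | null;
  return (
    typeof p?.id === "number" &&
    typeof p?.name === "string" &&
    typeof p?.price === "number" &&
    typeof p?.category === "string"
  );
}

async function checkProductResponse(): Promise<void> {
  // Hypothetical product API; Node 18+ provides a global fetch.
  const response = await fetch(
    "http://localhost:3000/api/products?category=home-and-garden"
  );
  const body = (await response.json()) as unknown[];
  const misshapen = body.filter((item) => !looksLikeProduct(item));
  console.log(
    `${body.length} products returned, ${misshapen.length} fail the shape check`
  );
}

checkProductResponse().catch(console.error);
```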

While valuable, these probably sit towards the 'checking' end of my spectrum. Therefore I might add:

"Explore the response of home and garden products returned from the product api
Using a variable number of concurrent requestsTo discover the point at which the response time may degrade"

This to me is a bit more 'testy', as I surmise JavaScript is single threaded, so concurrency may be a problem. If the solution doesn't work, the problem isn't solved. If I get the expected (by some oracle) data back, but the response time increases by some magnitude when concurrency is introduced, then maybe the problem isn't solved after all. Testing, for a specific technology risk that has a business impact. And so on; I iterate over my charters with testing and checking in mind.
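For what it's worth, a sketch of how that last charter might start life as a small tool: fire batches of concurrent requests at the (again invented) endpoint and watch how the slowest response time moves as concurrency climbs. Where it degrades sharply is where the interesting testing conversation begins.

```typescript
// Time a single request, draining the body so the timing covers the full response.
async function timedRequest(url: string): Promise<number> {
  const started = Date.now();
  const response = await fetch(url);
  await response.text();
  return Date.now() - started;
}

// Fire increasing batches of concurrent requests and report the slowest
// response in each batch; a sharp jump suggests response time is degrading.
async function probeConcurrency(url: string): Promise<void> {
  for (const concurrent of [1, 5, 10, 25, 50]) {
    const timings = await Promise.all(
      Array.from({ length: concurrent }, () => timedRequest(url))
    );
    console.log(
      `${concurrent} concurrent requests, slowest took ${Math.max(...timings)}ms`
    );
  }
}

// Hypothetical endpoint, as before.
probeConcurrency(
  "http://localhost:3000/api/products?category=home-and-garden"
).catch(console.error);
```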

Do I slavishly follow the exact definitions of testing and checking? Nope. Is it perfectly congruent? Nah. Is it useful to me? Yep.

I could go on but I won't. It's a model, one of many. Better to select and use models based on their strengths and weaknesses, using your critical mind and experience.


For those who may care, my sticky oar on the debate is as follows:

  • Checking is a tactic of testing, a really important one. Automated or otherwise. Good testing contains checking. Automated testing should be embraced, encouraged and understood, in the spirit of seeing the benefit and harm in all things.
  • I often craft tests which use high volumes of automated checks to explore behaviours around stress, repetition and state, and I have found some lovely tooling to facilitate this (a rough sketch of the idea follows this list). I often throw these checks away immediately once there is no perceived (to my stakeholders) value left, and similarly with tests. I try to avoid sunk cost where I can.
  • I also really like, "a failed check is an invitation to test." Suggests a symbiosis or extension of our senses, or perhaps even a raised eyebrow. The use of the word invitation is delightful, checking facilitating testing.
  • That said, calling something a check or a test doesn't bother me overly. This may be lazy language, but on occasion I have seen the word 'check' used to suggest 'unskilled'; I consider the lazy language a price worth paying when weighed against potential alienation. As an applied model of communication, testing and checking is a little dangerous in thoughtless hands.
  • With regard to automation, where appropriate I push checks down the stack as far as possible, but without ravenousness. As checking is a tactic of testing, I select it when appropriate. I apply a mostly return-on-investment model to this: how much to run, for how long, and its lifespan versus the entropy of the information it yields.
  • Good testing informs why certain tests (checks) are important, what you test (check) and where, in addition to how you do it and the longevity of those tests (checks). Kind of reads OK either way to me. Which is the point I took away from Exhibit C, and that many people have made eloquently to me a good few times.
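As promised above, a rough sketch of the high-volume, throwaway kind of check I mean: run the same small check a large number of times and record which runs fail, to surface problems of repetition and state. The endpoint is hypothetical and the check itself is a placeholder for whatever matters in context.

```typescript
// A placeholder check against a hypothetical endpoint: does it keep answering
// with a 200? Swap in whatever actually matters in your context.
async function check(): Promise<boolean> {
  const response = await fetch("http://localhost:3000/api/health");
  return response.status === 200;
}

// Run the check many times in a row and record which runs fail, to surface
// problems of repetition and state. The output is the point; the code is disposable.
async function hammer(runs: number): Promise<void> {
  const failedRuns: number[] = [];
  for (let run = 1; run <= runs; run++) {
    try {
      if (!(await check())) failedRuns.push(run);
    } catch {
      failedRuns.push(run); // network errors count as failures too
    }
  }
  console.log(`${runs} runs, ${failedRuns.length} failures`, failedRuns.slice(0, 10));
}

hammer(1000).catch(console.error);
```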

Some references that I've consumed and thought about:

Exhibit A:

And perhaps Exhibit B:

And maybe Exhibit C:

And gimme a D:

And E's are good:

Thursday, 30 June 2016

In the Danger Zone

Kenny Loggins said it best.

Last night I stepped right into the 'danger zone.' I attended a roundtable on testing arranged by a recruitment agency, surrounded by big financial services test management and even those representing 'big' consultancy, amongst others. I would not usually attend something like this, to be honest; it's outside my usual bubble.

I have endeavoured this year to talk about testing at a range of events, whether they be non-testing-specific or, as on this occasion, outside the sphere of my usual haunts. One of my prevailing feelings after a TestBash (for example) is that it was great but for the most part confirmed my world view.

Three questions were posed.
  1. What is the value of a tester?
  2. What are testers accountable for?
  3. What is your opinion on the future of testing?
I thought I would note what my response was for each. Here it is:

Also, for download:

As there were three questions, I'll note my three takeaways from the session:
  1. We, as testers, often still talk about cost and not value. As in 'if this bug had got through, it would have cost X' rather than 'the team delivered a revenue generating feature with a value of Y.' Let's try more positive, team-based language.
  2. The question 'should testers be embedded in teams?' was explored. My world has been exactly that for the last four or five years. It was a timely reminder that not all organisations value that arrangement, and therefore the appreciation of a tester's value by other disciplines is given less opportunity to grow.
  3. Community, specifically that which is external to organisations, is our key to moving testing forward. I note some of the debate recently about automation, while painful for some, is a great example of challenge, clarification and hopefully soon, understanding.
Attend something you might usually not. I believe it's worth it.

Sunday, 22 May 2016

Tester in Development

After I shared this on Twitter a little while ago:

I got to thinking about the 'Developer in Test' pattern which has gained in popularity over the last 15 years of my testing career (but probably existed way before that), so here is my expansion on the point. 

Who in their organisation has one (or many) of these?
  • Technical Tester
  • Developer in Test
  • Software Engineers in Test
  • Software Development Engineers in Test
  • Software Engineers in Test Levels One, Two and Three

* Note 1 - As a sub pattern, it seems that the job title for this pattern has got longer over time, although I am being a little naughty with data. However, would it be the first time a (sometimes) badly defined job got a lengthy title to give it some credence?
** Note 2 - I have never heard of a Programmer in Test. Let me know if anyone has seen/lived this. It may have the benefit of at least describing the primary activity of the job, although maybe testing should be that primary activity.
*** Note 3 - I would very much like to be known as a 'Tester in Development' from now on. Which would be a ludicrous title right? Suggesting that I am some kind of foreign body, rather than part of how good stuff is built? Although, read in a learning context, we are all testers in development.

I get the feeling there will be a few nods out there. 

I guess we could argue about a few of these titles. However, for me, testing is a technical profession, as technical skills are acquired via learning and practice, specific to that profession or the task at hand. Hell, most people who have a profession have, theoretically at least, acquired specific technical skills, rendering them technical; with what proficiency is a different debate. I don't see testing as an exception. But, you know, testers need to be more technical...

I digress.

Patterns within Patterns 

If you do have one of the specific titles above in your organisation, there may be one or more of the following patterns swirling around in that larger pattern:

Not hiring developers who value testing...

Wrestling with your developers about delivering testable chunks of functionality? Thrown over the wall at the last minute? No unit testing? 500 errors when you load the new site (I'll let you decide if I mean HTTP errors)? Unfortunately, developers who do not value testing are still very much part of the software development world. And if you don't determine their propensity to value testing at hiring time, your life will get very interesting in the next few months. So let's just hire a Developer in Test to bridge the expanding gap, rather than those who view testing as a key part of building things.

Developers don't have time to create tools for testing...

Suppose you get the first bit right and you hire a bunch of developers that value testing. You, as a tester, identify a system which you need to be able to control and observe closely to discover information about risk, maybe even save a bit of time down the way, a little early discovery. But no one has time for that. Sounds like a great idea, but we need to be the first to bust open the Iron Triangle of cost, time and scope to deliver this thing. Let's get a Software Engineer in Test to do it. This might get the job done but it also might point to a deeper sustainable pace problem.

Testers playing to weaker skills over strengths...

Programmers are, for the most part, really good at programming, and they practice lots. Testers, while a great many can program, are probably less effective at it; they practice less. I'm not saying testers shouldn't program, but the power of a joint approach to, say, building a check automation framework or a tool to observe the health of a system creates an opportunity to play to strengths on all sides. I can program, but the developers I work with are better at it than me. I encourage them to do it, and work with them to make meaningful checks and information diviners. Or we can compartmentalise that work into a Technical Tester, reduce the level of engagement and probably subtly miss the point from a programming and testing point of view.

Tool aware but technologically unaware testers...

A developer once said to me:
"Ash, now the testers have got hold of {insert popular service layer testing tool here}, it's like having a rail gun full of XML indiscriminately firing the wrong thing at the wrong endpoints."
Well, I'm all for a bit of randomness in testing; it certainly tipped out a boatload of error handling goodies. However, it points to a wider phenomenon: testers becoming tool aware but lacking technological awareness. As in, I have a means of invoking this RESTful service but I don't understand its structure, its strengths and weaknesses, or where I need to probe. So, this type of problem might be farmed out to a Software Developer in Test Level 3, at the expense of the learning required by others, learning which just might help them on the journey to becoming technologically aware.

Local Optimisations

Technical Tester, SDiT, SDEiT Level X. Call it what you want, it's still one of my favourite local optimisations in testing: the system may need that role to be filled, but we fill it by giving someone a job title. Based on the patterns above, we may be missing the bigger picture.

Wednesday, 20 April 2016

By the community, for the community

I was going to write a blog about the various activities we participated in on a fantastic day at the Leeds testing community's second Atelier. Trust me, they were great, but a comment I received the day after intrigued me more. An attendee said to me:

After the conference yesterday, I realised that "what are testers?" is a much more interesting question than "what is testing?"

I dug a little, asked for the deeper reasoning:

Well, it's events like yesterday where you see testers are a far more diverse bunch of people than in any other field of IT I can think of.

I was knocked back; I believe this was what we wanted to reflect with this event all along.

When I consider it, we chose a venue which reflected our values. Wharf Chambers is a workers' co-op run with fairness and equality at its heart. Every comment we received was that the relaxed atmosphere enhanced the event; people happily participated and asked questions. For me, much better than a hotel, auditorium or meeting room.

Then look at the sponsors, Ministry of Testing, I need say no more. The Test People. I was forged there to be honest, it will be with me forever. Callcredit. I was inspired there. Then, a chap called Tony Holroyd withdrew some money from his personal development fund and gave it to us. To me, that is amazing.

Our contributors were of all ages, genders, nationalities, experience levels, backgrounds and disciplines: testers turned entrepreneurs, agile coaches, and graduates who had been through a 'testing academy', an increasingly common phenomenon.

I can't thank everyone who was part of the day enough, it's a long list, you know who you are.

Now to rest, then reflect and look to the next Atelier... 

Wednesday, 9 March 2016

Scale Panic

Cast your mind back. 

The last time you were exposed to a new system how did you react? Was it a time of hope and excitement? Or of anxiousness and nagging dread? If it was the latter, you may have been suffering from something I like to call 'scale panic.'

First up, let me define what I mean by this in a testing context:
'When a tester first encounters a new system they need to understand and evaluate, the cognitive load from understanding the parts, interactions and boundaries is too intense. The tester often enters an agitated and slightly disgruntled state of paralysis for some time after scale panic has taken hold.'
The subsequent testing approach and effectiveness can suffer, as without an appreciation of the big picture crucial context is often left undiscovered. So, why does this phenomenon occur, I hear you ask? Well, every situation is different of course, but I believe it is centred on a key trait for a tester: awareness. To clarify, let's decompose:

  • People Awareness
    • Testers have many sources of information. Documentation, systems, financial reports, marketing information, the list is long. However, people generally generated this stuff, and those people are worth getting to know. If your oracle gathering strategy doesn't include people, then you are most likely missing out on a crucial aspect of the system puzzle. If the majority of oracles were generated by people (or indeed are people), your questioning and communication skills are even more important than you probably think.
  • Technical Awareness 
    • I really don't want to disappear down the testers being technical rabbit hole. That argument is grey and for me, lacks clarity, and would need a whole other blog/book/mini series. Not what I'm after here. As a tester, do you understand the strengths, weaknesses and quirks of a given technology? What is a relational database good at? What is a document database terrible at? Without these relative basics, the cognitive load of a new system can be too much to bear. Learn the basics of technology and a new system? Big rocks to crack...
  • Depth Awareness
    • Man, I have a detail problem. Do I need to understand every little bit of every little thing before evaluating something? Sometimes. Does that get me into trouble? Oh yeah, but thankfully less often than it used to. I have learned that detail is useful, but when it is useful is the real kicker. One of the key habits for a tester is to put yourself on a leash. Whenever you feel yourself diving deeper than the situation demands, or drifting some distance from the beaten track, snap back to reality. Personally, I use testing sessions with a timed leash, which I set before going off road.
  • Temporal Awareness
    • Well, if testing is about shattering illusions, here's a biggy for pretty much everyone. Systems change over time. In fact, it's pretty much the reason we are all in a job (and sometimes sadly out of a job), so when looking at the big picture of a system, accept that time will pass and it will change. Your understanding is an oracle, which is fallible and subject to entropy, as information erodes over time. This is natural; the key is to accept this cascading fallibility. When overcoming scale panic, accept you are taking a snapshot at a moment in time and question it accordingly as you test.
  • Existential Awareness
    • When in the grip of scale panic, I see testers making small parts of systems move, which may give the illusion of progress in understanding but feeds an obsession: the obsession with how it works over what it is made up of and why it exists. I assert that how something works gives you a small insight into the scale of a system, but what it is made up of and why it exists give you a wider picture. Understanding the what and the why helps a tester determine whether the problem is solved at the same scale as the system and aligned with its purpose, rather than just observing that what they can see works in some way in an unknown context.

I've been thinking about this a lot, mainly because, as (mostly) a career consultant, I encounter new systems all the time. I have built a personal model, expressed above. Next time you encounter a new system, think awareness. Think about the people, technology, depth, time and existential aspects of the system. Then map that: a useful output for everyone, and a challenge to existing assumptions.

Use it as a guide for your testing, always remembering that it's a model. It's wrong, but it's very useful.