Tuesday, 14 March 2017

Test Approach Mnemonic: MICROBE


With TestBash 2017 on the way, I've been reflecting on my journey as a public speaker, since being inspired by the speakers and overall experience of TestBash Brighton 2013. Looking through some older material, I discovered 'Testing is Exploratory', delivered at Leeds Testers Gathering back in 2014, extolling the virtues of a lightweight and transparent test approach, inspired by my attendance on Rapid Software Testing the previous year.

Here's what I said then:
What is a 'Test Approach'? 
"It is the act of THINKING about the BUSINESS PROBLEM and how TESTING will contribute VALUE using your own BRAIN and that of OTHERS with the outcome of DELIVERY in mind." 
Test Approach Mnemonic:
  • Mission - the primary concern of the testing; one your mum should understand.
  • Involving – involve stakeholders, discover their biases, not your own. 
  • Challenges - what questions are we seeking to explore? 
  • Risk - we are in the risk business, so what are the business risks?
  • Observable - show it rather than write it, pictures and spoken words are engaging.
  • Brief – being concise is the real challenge; it means choices are required!
  • Epistemic Humility – Have you been humble with your knowledge? Have you allowed challenge and diversity? 
See, MICROBE reminds you to keep it small, see?

What about now then? I would amend my definition of a 'Test Approach':
"It is the act of CONTINUOUSLY THINKING as information EMERGES about how TESTING can provide ACTIONABLE INSIGHTS using the collective ORACLES around you to DELIVER VALUE to those who matter."

The key additions are the loud notions of 'CONTINUOUSLY', 'EMERGES' and 'ACTIONABLE INSIGHTS', which are inspired by more experience of testing in an agile context. Having been fortunate enough to work on a number of projects and products where the architecture has been allowed, for the most part, to emerge, an adaptive test approach which sharpens with change has been crucial. Creating a great test approach which provides information that was relevant, but no longer is, is waste of a high order. In terms of ACTIONABLE INSIGHTS, given we build systems iteratively, timely information from testing can actually be acted upon near the point of discovery. Make it count!

I would also disambiguate 'your own BRAIN and that of OTHERS' to 'ORACLES', as that implies more than just brains, which are only a subset of possible oracles. DELIVER VALUE is a key addition. Testing, in my view, suffers from the law of diminishing returns: the more you do, the less value you are likely to get. To believe that one's testing can match the infinite variety of the world is a false position. Delaying delivery is delaying learning.

I would change the actual mnemonic in a few ways too:

  • Mission - the primary concern of the testing, as in there are many concerns, but a choice must be made. After all, we can't test it all.
  • Involving – involve stakeholders, discover their biases, and your own.
  • Challenges - what are the questions that those who matter wish to be answered? What might I need to fulfil the mission that I don't currently have?
  • Risk - we are in the risk business, so what are the business risks?
  • Observable - if you can't observe what you are testing, how effective is your testing? If I can't observe the entity I am testing (via logs for example) how can I remedy that?
  • Brief – Concise is the real challenge, how does one get a breadth of testing lenses so one can sample the product to find important problems?
  • Epistemic Humility – Have you been humble with your knowledge? Have you allowed challenge and diversity?

I still love to set a mission. I like that it feels like a meaningful negotiation. Where shall we focus? What isn't important? It gets to the heart of the matter early, I find.

As for the rest, most of the mnemonic has changed apart from two areas: risk and epistemic humility. I still fundamentally believe testing is risk based, and that the information exposed by testing should concern those risks to be of value. Beyond that, knowing that you don't know everything is something I'm glad I've retained as my career has progressed. It's even more true now. As a proportion, I know a lot less now than I did then.

Always fun to reflect on what you used to think about testing. Try it!

Original slides here:

https://www.slideshare.net/slideshow/embed_code/key/FVzbIPcQoGWFCA

Tuesday, 31 January 2017

Getting started with testability



At TestBash Netherlands, I said that, in my experience, a lot of testers don't really get testability. I would feel bad if I didn't follow that up with a starting point for expanding your mindset, explicitly thinking about testability day to day, and making your testing lives better!

In large-scale, high-transaction systems, testability really is critical: compared to the vastness and variability of the world, the testing done within organisations before deployment is limited. We need ways to see and learn from the systems we test where it matters, in Production.

Being able to observe, control and understand is central to testing effectively, and there are loads of resources and experience reports out there to help. I was/am inspired by James Bach, Seth Eliot, Matt Skelton, Sally Goble, Martin Fowler and a little bit of PerfBytes/Logchat, so let's see if it works for you!

Overall Model:

Heuristics of Software Testability by James Bach

http://www.satisfice.com/tools/testable.pdf

In particular, look at the intrinsic testability of a system. I firmly believe that a system with high testability has high supportability in this regard. If you could build a system that gave those with operational responsibility the ability to observe, control and understand what was happening, they would be stoked, right?


Videos:

Seth Eliot introduces testing in production with some compelling examples


Sally Goble then talks about shifting testing away from current models and leveraging production:

Articles:

Perfect Testing 

Feature Toggles

https://martinfowler.com/articles/feature-toggles.html

Rethinking Pipelines



Data Driven Quality

Signal vs Pass/Fail

Ask for Testability - Interesting Idea on Scriptable API's to drive the product

http://www.developsense.com/blog/2014/05/very-short-blog-posts-18-ask-for-testability/

A Map for Testability - Visualising testability in early product conversations

http://www.a-sisyphean-task.com/2014/07/a-map-for-testability.html

Putting Your Testability Socks On - Analysis of the SOCKS testability mnemonic

http://www.a-sisyphean-task.com/2012/07/putting-your-testability-socks-on.html


Thursday, 29 December 2016

Testing Thank You's for 2016


I want to close 2016 with some thank you's to people who have helped and influenced me this year:

  • Leeds Testing Atelier and Crew - organising not one but two Ateliers this year has been an absolute blast. The days themselves are great, but I value the company of six other like-minded people, who are dedicated to sharing knowledge and making life a bit better. Independent, not for profit but for joy. It, by some distance, has made 2016 a year to remember. Thank you for the laughs, hangovers and friendship.
  • Ministry of Testing - I had the privilege of delivering a workshop at TestBash Brighton this year. Not much makes me nervous, but delivering my content at such an event certainly did. As an individual achievement, it made me very proud indeed. Then the year got better: I presented an infrastructure masterclass in October. Thank you for the opportunities and support.
  • Gus and SpeakEasy - I reached out for help from SpeakEasy this year, and Augusto Evangelisti kindly volunteered to help me with my speaking career and, well, anything else we wanted to talk about. Gus is thoughtful and forthright, challenges my assumptions and internal models, and always reminds me to think about building relationships and listening to customers. Thanks for everything Gus, I will be calling on you in the New Year.
  • Lesley Walkinshaw - it is so refreshing to work with someone who believes that by building a community within an organisation, everyone's life can be that bit more awesome. Going back into a permanent role wasn't an easy decision, but you made a big difference. Thank you for your knowledge, patience and inclusive nature.
  • Anyone I have coached - I have coached a number of people this year, both in testing and career contexts. I am always captivated by their stories, their capacity to develop themselves and teach me about myself and how I perceive the world around me. Thank you, you know who you are.
That's not everyone; there have been so many that I picked my highlights. For everyone else, thank you for helping me learn, in success and failure.

2017 is going to be full of opportunity again. TestBash NL, DEWT, Copenhagen Context, Ateliers, Testers Gathering reboot in Leeds, Testing Showcase North. Oh, and writing a book on testability with my partner in all things, Gwen Diagram, who deserves the biggest thanks of all.


Sunday, 13 November 2016

A Lone Tester at a DevOps Conference


I recently had the chance to go to Velocity Conf in Amsterdam, which one might describe as a DevOps conference. I love going to conferences of all types; restricting yourself to discipline-specific events is counterintuitive to me, as no discipline involved in building and supporting something is isolated. Even if some organisations try to keep it that way, reality barges its way in. Gotta speak to each other some day.

So, I was in an awesome city, anticipating an enlightening few days. Velocity is big. I sometimes forget how big business some conferences are; most testing events I attend are usually in the hundreds of attendees. With big conferences come the trappings of big business. For my part, I swapped product and testability ideas with Datadog, PagerDuty and others for swag. My going rate for consultancy appears to be t-shirts, stickers and hats.

So, let's get to it:

3 Takeaways

  • Inclusiveness - there was a huge focus on effective teams, organisational dynamics and splitting the monolith. The key was going beyond diversity and into the realms of inclusion. As in, an organisation's workforce can be diverse by the numbers, but if people aren't included in discussions and decisions, then that organisation is falling short. There is still a long way to go, and if diversity is treated as a numbers game, we will struggle to realise the benefits of true diversity. Inclusivity was exemplified in the talk from Paula Kennedy of Pivotal about her role there(1).
  • Web Performance - Velocity has a heavy client side performance focus, which has always interested me, so I tried to catch a number of these talks. There are two main areas which got my testing senses tingling:
    • Progressive Web Apps(2) - specifically service workers(3) and background sync(4). Imagine if there was a process within your web app which behaves like a proxy, managing your online and offline experience, downloading content when data is available for consumption when it's not. I can think of a million interesting tests there. Fascinating challenge, with some new tooling too(5); there's a small sketch of the idea after this list. Twitter on Android is a progressive web app. Give it a try.
    • HTTP2 Server Push(6) - along a similar vein, imagine if you could speculatively send resources to the client without waiting for a request. What improvements could you make to the initial and next page load time and experience? How much might you push which is not needed, though? Or mistimed? How far can you anticipate needs? How would one test this(7)? As with progressive web apps (see the second sketch after this list), a testing challenge I would relish.
  • Containerisation - I have had my own development environment for many years now. I've invested time into learning how to build the applications I test locally. The ability to observe and control is vast, and the technical awareness gained from the process is contextual gold. The development environment I use now is a number of Docker containers running each of the applications I need. The next step I wish to investigate is cluster management using technology such as Kubernetes(8), with supporting monitoring tooling such as Prometheus(9), perhaps even augmenting my development environment beyond single containers (having multiple webservers and caches, for example), as needed to extend my testing as early as possible.
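To make the service worker idea concrete, here is a minimal sketch of one acting as that on/offline proxy. The install and fetch events, the Cache API and respondWith are the real service worker interfaces; the cache name and asset paths are hypothetical, so treat this as an illustration rather than production code.

```javascript
// sw.js - a minimal service worker: serve from the network when possible,
// cache fresh responses, and fall back to the cache when offline.
const CACHE = 'offline-v1'; // hypothetical cache name

self.addEventListener('install', (event) => {
  // Pre-cache a few known assets at install time (hypothetical paths).
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/styles.css']))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Online: keep a copy of the response for later offline use.
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      })
      // Offline: serve whatever we cached earlier.
      .catch(() => caches.match(event.request))
  );
});
```

Plenty of test ideas fall straight out of it: what happens when the cache is stale, when storage is full, or when the network drops mid-response?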
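And for server push, a sketch using node.js's http2 module (createSecureServer, stream.pushStream and respondWithFile are the real API); the certificate and file names are hypothetical. It speculatively pushes a stylesheet alongside the page:

```javascript
// An HTTP2 server push sketch: when the client requests the page,
// push the stylesheet before it is asked for.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'), // hypothetical certificate files
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push /style.css without waiting for the client to request it.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return; // the client may have disabled push
      pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
    });
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);
```

The testing questions write themselves: what does the client do with a pushed resource it never needed, and how would you observe that?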


3 Ponderables

  • Behind the Times - testers (some, but not all) can tend to be behind the times, either worrying about what agile or devops means, or scrabbling around trying to find test approaches and tooling for a new technology which has seemingly landed from nowhere, but in reality has probably been around for a while. We always seem to be busy doing something else. With a broader scope for the events we attend, perhaps we might be in the vanguard one day, rather than the baggage train?
  • Echo Chambers - don't get all offended. I love the testing events I attend. I help to run one. However, the danger of the echo chamber is clear and present. Similar ideas being presented to an agreeable audience is something we should guard against; testing is not a bubble, nor should it be. Velocity provided an excellent reminder that it's a big wide world of ideas out there ready to be embraced, and testing is part of that.
  • Hanging out with Dev and Ops in my Organisation - one of the coolest parts was that about 20 people from my organisation came to Velocity. We stayed in the same hotel, went for dinner and a few drinks, and had a good time. More names to faces, and I'm already getting involved with infrastructure testing off the back of it. Winning. And I even got a day to have a look around Amsterdam after.

At the conference there was some surprise, among the other attendees I talked to, that a tester might attend. One even asked if testers were still a 'thing.' That might tell a separate story. Of course, I don't know that I was the only person there who identified as a tester, but I'm willing to guess you wouldn't have needed many fingers upon which to count us.

I often hear testers say 'well, that {conference topic} is not part of my role' or 'I don't know what I can contribute.' My answer is to go wherever you can add value, as testing happens throughout the lifetime of a given system or product. So let's reach out, go somewhere a little different.

References

Main Conference (Slides and Videos)

(1) http://conferences.oreilly.com/velocity/devops-web-performance-eu/public/schedule/proceedings

Progressive Web Apps

(2) https://developers.google.com/web/progressive-web-apps/
(3) https://developers.google.com/web/tools/service-worker-libraries/
(4) https://developers.google.com/web/updates/2015/12/background-sync/
(5) https://developers.google.com/web/tools/lighthouse/

HTTP2 Server Push

(6) https://http2.github.io/faq/
(7) https://canipush.com/

Kubernetes and Prometheus

(8) http://kubernetes.io/docs/whatisk8s/
(9) https://coreos.com/blog/coreos-and-prometheus-improve-cluster-monitoring.html

Friday, 14 October 2016

Leeds Free, Independent, Punk Testing Atelier




Tuesday the 20th of September 2016 marked the 3rd iteration of the Leeds Free, Independent, Non-Affiliated, and quite frankly pretty darned Punk Testing Atelier.

About 100 attendees, 6 speakers, 4 workshop facilitators, 8 panel guests, 2 panel facilitators and about 10 brave presentation karaoke volunteers. Lots of active involvement if you want it, observation for the passive, without judgement.

There are 7 co-organisers too. We are all different in many ways (gender, nationality, background), with the common factor of being enormous dorks. At the end of the day I paid tribute to our ingenuity, resourcefulness and enthusiasm for creating such a day for all. Every word was meant.

Our beginnings are humble, and so remains our intent. We aspire to give voice to those interested in and (inevitably) affected by testing, primarily in the thriving technology hub of Leeds, especially to those who wish to find their voice. They often have the most interesting stories to tell, and their first appearance has an extra spark. Often, that rawness is where a great deal of learning can be found.



The day requires you to give two of the most precious things you have: your time and your attention. In recompense, we are very, very light on your pocket. We do welcome sponsorship, for the purpose of enhancing the experience and keeping it free; all sponsors contribute a small amount of money, and all are equal regardless of means. That stops anyone getting hot and heavy.

Our hosts are Wharf Chambers, a co-operatively run venue which is regarded as a safe space for the many communities of Leeds. It suits our independent, inclusive mission and feels like home, with its variety of spaces and atmosphere. When I'm there, I really don't miss the sterility of the meeting room or conference hall.



I won't comment too hard on the content and how meaningful it was for the attendees, as I can't speak for them. I hopped between sessions, nudged elbows, tapped shoulders and generally cajoled throughout the day. Events blur by when you're an organiser. But if I had to pick a few top moments (and you are correct, I'm biased), I would say:

  • Dave Turner gave an eloquent reminder that test automation is extremely valuable and progressive, but it can be a risky game, and that some testers can be bloody well dangerous when given tooling in which 'anyone' can automate tests. It needs to be a team effort, supported by considerations of technology, product and risk. As he is one of the most forward-looking developers, managers, coaches and thinkers I know, I believe it's feedback to pay attention to.
  • Ritch Partridge and James Sheasby-Thomas spoke on the importance of user experience, design and accessibility in our thinking. I loved the mix of empathy, tools and techniques introduced by these two talks; hopefully our attendees will have had their eyes opened a little more, with a few new questions to ask as a result.
  • Gwen Diagram, for always and forever showing that testers are first and foremost punks, here to tread all over the status quo, ignore accepted wisdom and be the best they can be. When she spoke, I saw nothing but smiles and rapt attention. Imagine what testing would be if we were all a little more Gwen Diagram?

I have nothing but admiration for all those involved. In the current inventory of my career, the Atelier is, by a distance, my biggest source of satisfaction.



Friday, 2 September 2016

What if information isn't enough?



One of my aims for this year has been to attend and talk at what I will class, for the purposes of this blog, as 'non-testing' events, primarily to speak about what on earth testing is and how we can lampoon the myths and legends around it. It gets some really interesting reactions from professionals within other disciplines.

And usually those reactions (much like this particular blog), leave me with more questions than answers!

Huh?

After speaking at a recent event, I was asked an interesting question by an attendee. This guy was great: he reinvented himself every few years into a new part of technology, his current focus being machine learning. His previous life, 'Big Data', more on that later. Anyway, he said (something like):

'I enjoyed your talk but I think testing as an information provider doesn't go far enough. If they aren't actionable insights, then what's the point?'

This is why I like 'non-testing' events: someone challenging a tenet that has been left unchallenged in the testing circles I float around in. So, I dug a little deeper and asked what was behind that assertion:

'Well, what use is information without insight, the kind you can do something about? It's getting to the point where there is so much information, providing more doesn't cut it.'

Really?

On further, further investigation, I found he was using the term 'actionable insight' as in his previous context, the realm of 'Big Data.' For example, gathering data via Google Analytics on session durations and customer journeys: lots of information, but without insight, probably of dubious usefulness, unless the analysis includes other axes such as time.

There is an associated model for thinking on the subject of 'actionable insights', namely a pyramid, based on the Data Information Knowledge Wisdom pyramid (7). We love our pyramids; other shapes of models for thinking are available, apparently. There is the odd cone in the literature too.

I also enjoyed the heuristics of an actionable insight in the Forbes article (3):

[Embedded image: the Forbes article's attributes of an actionable insight]
If the story of your testing includes elements of the above, it would likely end up quite compelling. It strikes me that an actionable insight is a fundamentally context-driven entity: it takes into account the wider picture of the situation while being clear and specific. If my testing can gather insights which satisfy the above, I believe my stakeholders would be very satisfied indeed. Maybe you could argue that you are already producing insights of this calibre but you call it information. Good for you if you are.

So?

What immediately set my testing sensibilities on edge, from the conversation and subsequent investigation, was the implication that testing would produce insights and imply that actions should be taken (1), which takes us into a grey area. After all, what do we 'know' as testers? Only what we have observed, through our specific lenses and biases. The person who questioned me at my talk believed that was a position of 'comfort but not of usefulness.' More food for thought.

Moreover, those involved with testing are continuously asked:
'What would you do with this information you have found?' 
I've been asked this more times than I can remember. Maybe it is time we considered 'actionable insights'; if this question is going to persist, we'd have a better chance of a coherent answer. Otherwise the information gleaned from testing might be just another information source drowning in an ever-increasing pool of information, fed by a deepening well of data.

Moreoverover, it showed the real value of getting out into the development community: questions that make you question that which you have accepted for a long, long time.

Reading

  1. http://whatis.techtarget.com/definition/actionable-intelligence
  2. https://www.techopedia.com/definition/31721/actionable-insight
  3. http://www.forbes.com/sites/brentdykes/2016/04/26/actionable-insights-the-missing-link-between-data-and-business-value/#5293082965bb
  4. https://www.gapminder.org/videos/will-saving-poor-children-lead-to-overpopulation/
  5. http://www.slideshare.net/Medullan/finding-actionable-insights-from-healthcares-big-data
  6. http://www.perfbytes.com/manifesto
  7. https://en.wikipedia.org/wiki/DIKW_Pyramid
  8. http://www.slideshare.net/wealthengineinstitute/building-strategy-using-dataderived-insights-major-gifts/3-Building_Strategy_Using_Data_and

Friday, 29 July 2016

A Personal Model of Testing and Checking


As part of the whole CDT vs Automators vs Team Valor vs Team Mystic battle, one of the main sources of angst appears (to me) to be the testing and checking debate.

The mere mention seems to trigger a Barking Man type reaction in some quarters. Now, I enjoy someone barking like a dog as much as the next person, but when discussions around testing resemble the slightly grisly scenes in Cujo, we've gone too far. To me, the fallacy at play appears to be "you strongly advocate X, therefore you must detest Y." Stands to reason, right? I've got two cats I love very much, therefore I cannot stand dogs.

Anyway, I like the testing and checking model. Note the use of the word model. I really mean that: it helps me to think. It helps me to reason about how I am approaching a testing problem and provides a frame, in the form of a distinction. More specifically, a distinction which assists my balance.

I've added it to my mental arsenal, as all good testers should do, in my eyes, with a great many models. Not an absolute, but a guide.

It is in the form of a question, asked while analysing a testing problem, during testing, or when I'm washing up (sometimes literally) afterwards:

"Now, Ash, how much exploration will you/are you/have you do/doing/have done about the extent to which this here entity solves the problem at hand and how much checking against, say, data being in the place that might be right according to some oracle(s)"

Let's show an example. I'm doing a piece of analysis on a user story, after a good natter with all the humans involved, for an API written using node.js.

I might have a mission of say:

"To test that product data for home and garden products in the admin data store can be parsed when retrieved and could be consumed by React to be rendered on the mobile website..."

I might generate a couple of charters like:

"Explore the structure of a response from the product api
Using the React properties model oracle
To discover if the data is of the correct type to be consumed by React" 
"Explore the retrieval of specific home and garden products returned from the product api
Using a comparison of the contents of the admin data store as an oracle
To discover if the response data corresponds to the content of the origin"
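As a sketch of what the first charter's check might look like when automated (assuming a hypothetical endpoint and field names; node.js, using the built-in http and assert modules):

```javascript
// Check the structure of a product api response against the React
// properties model oracle. Endpoint and fields are hypothetical.
const assert = require('assert');
const http = require('http');

http.get('http://localhost:3000/products/home-and-garden/42', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const product = JSON.parse(body);
    // The oracle: types React expects to be able to consume.
    assert.strictEqual(typeof product.id, 'number');
    assert.strictEqual(typeof product.name, 'string');
    assert.strictEqual(typeof product.price, 'number');
    assert.ok(Array.isArray(product.images));
    console.log('response structure matches the propTypes oracle');
  });
});
```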

While valuable, these charters are probably on my 'checking' spectrum. Therefore I might add:

"Explore the response of home and garden products returned from the product api
Using a variable number of concurrent requestsTo discover the point at which the response time may degrade"

This to me is a bit more 'testy', as I surmise JavaScript is single threaded, so concurrency may be a problem. If the solution doesn't work, the problem isn't solved. If I get the expected (by some oracle) data back, but the response time increases by some magnitude when concurrency is introduced, then maybe the problem isn't solved after all. Testing, for a specific technology risk that has a business impact. And so on; I iterate over my charters, with testing and checking in mind.
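A minimal sketch of how that charter might be driven, again with a hypothetical endpoint; it assumes node.js 18+ for the global fetch, and uses worst-case timing as a crude first signal:

```javascript
// Ramp up concurrent requests against the product api and watch for the
// point at which response time degrades.
async function timeOneRequest(url) {
  const start = Date.now();
  await fetch(url);
  return Date.now() - start;
}

async function run() {
  const url = 'http://localhost:3000/products/home-and-garden'; // hypothetical
  for (const concurrency of [1, 10, 50, 100]) {
    const timings = await Promise.all(
      Array.from({ length: concurrency }, () => timeOneRequest(url))
    );
    console.log(
      `concurrency ${concurrency}: worst response ${Math.max(...timings)}ms`
    );
  }
}

run();
```

The interesting testing starts where the numbers turn: is the degradation gradual, sudden, or accompanied by errors?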

Do I slave in an exacting fashion to the definitions of testing and checking? Nope. Is it perfectly congruent? Nah. Is it useful to me? Yep.

I could go on, but I won't. It's a model, one of many. Be better: select and use models based on their strengths and weaknesses, using your critical mind and experience.


Addendum

For those who may care, my sticky oar on the debate is as follows:


  • Checking is a tactic of testing, a really important one. Automated or otherwise. Good testing contains checking. Automated testing should be embraced, encouraged and understood, in the spirit of seeing the benefit and harm in all things.
  • I often craft tests which use high volumes of automated checks to explore behaviours regarding stress, repetition and state (there's a sketch of the idea after this list). I have found some lovely tooling to facilitate this. I often throw these checks away immediately, as there is no perceived (to my stakeholders) value left, and similarly with tests. I try to avoid sunk cost where I can.
  • I also really like "a failed check is an invitation to test." It suggests a symbiosis or an extension of our senses, or perhaps even a raised eyebrow. The use of the word invitation is delightful: checking facilitating testing.
  • That said, calling something a check or a test doesn't bother me overly. This may be lazy language, but on occasion I have seen the word 'check' used to suggest 'unskilled', and I consider that lazy language a price worth paying, as opposed to potential alienation. As an applied model of communication, testing and checking is a little dangerous in thoughtless hands.
  • With regard to automation, where appropriate I push checks down the stack as far as possible, but without ravenousness. As checking is a tactic of testing, I select it when appropriate. I apply a mostly return-on-investment model to this: how much to run, for how long, its lifespan versus the entropy of the information it yields.
  • Good testing informs why certain tests (checks) are important, what you test (check) and where, in addition to how you do it and the longevity of those tests (checks). Kind of reads OK either way to me. Which is the point I took away from Exhibit C, and one that many people have made eloquently to me a good few times.
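The high-volume, throwaway checks mentioned above might look something like this minimal sketch: hammering one hypothetical endpoint and flagging any drift in the response, hunting for state and repetition problems rather than a single pass/fail (node.js 18+ for fetch):

```javascript
// Repeat the same request many times; any drift from the first response
// is an invitation to test, not a verdict.
async function hammer(url, repetitions) {
  let baseline = null;
  for (let i = 0; i < repetitions; i++) {
    const body = await (await fetch(url)).text();
    if (baseline === null) {
      baseline = body; // the first response becomes the comparison oracle
    } else if (body !== baseline) {
      console.log(`repetition ${i}: response drifted from baseline`);
    }
  }
}

hammer('http://localhost:3000/products/home-and-garden/42', 1000);
```

Once it has told me what I need to know, it goes in the bin, as above.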


Some references that I've consumed and thought about:

Exhibit A:

http://www.developsense.com/blog/category/testing-vs-checking

And perhaps Exhibit B:

http://www.satisfice.com/blog/archives/category/testing-vs-checking

And maybe Exhibit C:

http://www.satisfice.com/articles/cdt-automation.pdf

And gimme a D:

http://chrismcmahonsblog.blogspot.co.uk/2016/06/reviewing-context-driven-approach-to.html

And E's are good:

http://www.ministryoftesting.com/2016/04/icky-good-words-software-testing