Saturday, 3 October 2015

Leeds Testing Community unConference

A few of you will know that we (well, Stephen Mounsey, Nick Judge, Fredrik Seiness, Phil Hargreaves et al. did all the hard work; I just flounced in and presented a workshop) have recently given birth to the Leeds Testing Community unConference. All conferences start from an acorn, a twinkle in the eye, and this was no exception. I didn’t want to let it pass without blogging on it, as I believe it to be the beginning of something big! There is a real thirst for this kind of event in Leeds, a thriving tech city with loads going on.

A quick whistle-stop tour of my highlights:
  • Uno – Laurence Wood presented on his agile heroes, including my close testing and monitoring pal Gwen Diagram. I will not say the ‘D’ word. He also spoke about one-to-one ratios of developers to product owners (pinch me). A very strong start from a great speaker.
  • Dos – My mobile testing workshop, entitled ‘Extending your testing senses.’ Despite my being the only person in the room currently testing in a mobile context, everyone really got stuck into using the on-device Android developer tools, testing the search functions of popular networks using CPU monitoring, layout tools and more. I heard a lot of heartening ‘I didn’t even know my phone did that’ comments. Even better, the same tools were used in the afternoon workshop. Joy.
  • Tres – Stephen Mounsey on sketchnoting was a great interactive session; I felt a sense of accomplishment and satisfaction as my barely legible doodles became a coherent map of the session. Or something like that. The key learning was turning previously dreaded meetings into something engaging, being present, and having a tangible output at the end of it.
  • Cuatro – An honourable mention for my good friend Clem Pickering, whose presentation of the Palchinsky Principles really resonated with me, with strong threads of experimentation and viewing failure as learning. Slide of the day showed how Prince Charles and Ozzy Osbourne share a surprising number of characteristics, a neat illustration of just how much perspective impacts your assumptions while testing.

All that remains is a massive thank you to the organisers, hosts (Callcredit Information Group, who I can testify are an extremely engaging organisation for those in software development) and my fellow speakers. 

Isn’t it exciting to be in at the beginning of something awesome?

Saturday, 19 September 2015


The world is really complex. Layers upon layers of stuff exist in the systems we test. Old stuff on top of new stuff, augmented stuff, rewritten stuff, interconnected stuff, internal stuff talks to external stuff, data from here, there and everywhere. Stuff is done on the move, stuff is done from places we never imagined. We are stuffed with stuff, and stuff about stuff. Metastuff.

The Problem with Stuff…

When testing this stuff, it’s pretty easy to get quite full of metastuff quite quickly. Then you get that post-Christmas lunch drowsiness, where you probably should be playing a board game with your family but for some reason you can’t move and are staring at the EastEnders Christmas Special (Phil is leaving Sharon, Beppe is selling E20). Being stuffed with metastuff has left you dull-witted, unable to focus on what matters.

Have I seen this in my fellow testers? Yes. Have I experienced this when testing complex stuff with multi-layered architectures? Oh yes.


There is a way to cope though. You can do it. You remember when the Emperor shoots lightning from his fingers in Star Wars? You need to be able to do that, in reverse: your fingertips become your enhanced and extended senses, focal points for targeted information, filtering out the stuff. You can cackle like a maniac while you’re doing it if you want.

We need to complement our existing senses. This can be achieved by creating (and then sharpening) our feedback environment while we test.

First Principles…

As is healthy in testing, as it is in life, let’s start with principles:
  • Find out what matters to your stakeholders. If one client provides 90% of the revenue and 90% of what they do is one specific action, focus your feedback there. I am saying this for completeness, as you already know this, right?
  • Complex systems are best understood in layers; receiving feedback from each layer is critical to creating an effective feedback environment.
  • Do not fear asking for systems to return concise information pertinent to testing: timestamps, exception details. This includes reducing the noise within logs and auditing tables, so that only critical information is presented, not everything for information's sake. Verbosity can be controlled, and your early environments can be more verbose than your production instances.
  • Distinguish between synchronous and asynchronous parts of a system. Knowledge is power. If you have a system which does some batch operations synchronously while other asynchronous actions are occurring, you need to test that, but you also need a way to isolate the feedback from each.
  • Use tools and techniques to make the feedback environment interactive and colourful. We see patterns; it’s one of our strengths, but it can be assisted. Make it move, and make stuff that is out of the ordinary look out of the ordinary.
  • Knowing how to set your environment to a known state is critical, and not just the test data either. Being able to set the whole system state to a known baseline gives you a feedback environment that is clean and as predictable as you can make it (see the sketch after this list).
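To make this concrete, here is a minimal sketch of what a session setup script might look like for a hypothetical LAMP-style stack. The database, paths, service names and baseline dump are all stand-ins for whatever your own system uses, not a prescription:

    #!/usr/bin/env bash
    # Session setup sketch - every name below is a hypothetical example.
    set -e

    # 1. Known state: restore a baseline database and empty the logs, so
    #    anything that appears during the session belongs to the session.
    mysql myapp < ~/baselines/myapp-clean.sql
    sudo truncate -s 0 /var/log/httpd/error_log /var/log/myapp/app.log

    # 2. Restart the services under test so no stale state leaks in.
    sudo systemctl restart httpd myapp-worker

    # 3. Listen while you test: follow the application log, drop known
    #    noise, and let grep colour anything out of the ordinary.
    tail -f /var/log/myapp/app.log \
      | grep --line-buffered -v "heartbeat" \
      | grep --color=always -E "ERROR|WARN|Exception|$"

The final pattern ends in |$ deliberately: every line still flows through, but only the interesting words are highlighted, which covers the known-state, low-noise and colourful principles in one small script.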

Specification by Example…

Here is an example setup for a system with a three-tier architecture, which I find is a tremendous way to show what I’m talking about. A testing session might be set up like this:

Establishing a baseline…a clean, (relatively) known system starting point

Setting up to listen while you test…a way to generate continuous feedback

Gathering feedback and generating coherent information…relevant and timely to the session

Each step digs deeper than the last: find an error in the PHP logs, then discover the Zend Event and the following Code Trace. This depth of information can be further refined by targeting log files with specific criteria, based on what you are focusing on, by piping them into grep, for example:

Looking out for specific process messages using tail -f /var/log/messages | grep "sftp", or monitoring processes similarly with watch -d -n 10 'ps aux | grep httpd' (the quotes ensure the pipe runs inside watch, rather than piping watch’s own output).
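Going a step further, and purely as a hedged sketch (the log file paths here are assumptions, not a standard), you can listen to more than one layer of the stack at once and timestamp the lines that matter as they arrive:

    # Follow two layers together; tail -f accepts several files and prints
    # a "==> file <==" header whenever the output switches between them.
    tail -f /var/log/httpd/error_log /var/log/php_errors.log

    # Or keep only the lines relevant to this session, timestamped on
    # arrival (--line-buffered stops grep holding lines back mid-pipe).
    tail -f /var/log/php_errors.log \
      | grep --line-buffered -iE "fatal|exception|zend" \
      | while read -r line; do printf '%s  %s\n' "$(date +%T)" "$line"; done

One targeted listener per layer, each filtered to the focus of the session, and the feedback from the web tier and the PHP tier starts to line up in time.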

The above is a good link into tooling, which is less the focus of this blog, but when used to complement your own senses and configured intentionally (so as not to swamp you with information, and pointed at the focus of the testing session) it is hugely useful. Personally I think that is a separate blog, and it would probably be useful to show how I leverage certain tools (such as Android Studio, for example) to enhance my testing senses.

Debugging innit…?

Onwards. This is distinct from debugging in my view, as you are providing the where and what of a problem, over the how and why. There is no barrier to the how and why of course; it depends on your context. This does rely on a certain level of technical awareness (as in, I know the strengths and weaknesses of the technology and how to use it to enhance the information I glean), but in a multi-skilled team this should not be beyond a collaborative effort. Further to that, I do believe that a feedback environment such as the one above is a team artifact, not only a tester’s domain. A tester’s bug reports should complement the debugging flow in a given technology or team, so only a few steps are required to begin the diagnosis of the how/why.

Further to this, your feedback environment might look a lot like your monitoring setup which can be applied on a micro scale (for a tester to use for example) or a macro scale (for a system, cluster or federation to use), thus creating a link between common feedback mechanisms for that tricky transition from development to operations. I won’t say the buzzword.

Over to you…

Next time you begin a testing session, ask the question:

Are my stakeholders getting value from the information this system can provide?

Then go a step further and ask:

Is the information the system provides filtered to my focus and therefore meaningful in context?

This will not replace your (un)common sense, but it may help save you from being inundated with a growing tidal wave of stuff. And all the stuff about that stuff.

Wednesday, 9 September 2015

Under Pressure

As a tester, you might recognise this feeling.

The world wants your tests to “pass.” Developers, Product Owners, Project Managers, Executives, everyone is looking at you, waiting for you, wanting the tests to “pass.” Wanting this feature to be delivered, this final piece of the going live puzzle.

Has it “passed” then?

Whatever it is, it hasn’t, and it can’t; at the heart of the matter, that isn't how exploration works. There are perceived and non-perceived problems, inconsistencies, misunderstandings, and conflicting stakeholder perceptions. It’s not your judgement to give; however, that doesn’t mean you don’t feel any pressure as a human being.

When you feel it, here are some fun thoughts that you might want to bear in mind:
  • It’s really not your pressure. One of the key lessons I have learnt is not to accept the pressure of others. I’m a real responsibility magnet (there is a whole other blog there). But I make no promises when it comes to testing; I just mine for information. Stop accepting others’ pressure and watch your testing life transform.
  • Shit ain’t all down to you. Honestly, you are special and important, but… For example, very early in my career, I had the last blocking bug on a gigantic, multi-multi-multi million pound offshoring project. I was being crushed by ownership of this hideous carbuncle of a bug. Afterwards I realised it wasn’t mine. The root of that glittering pearl of a problem was way, way back in the mists of the project, somewhere I had no control over at all. In another organisation. In the past. I had zero influence; there was literally nothing to be done.
  • It's quite popular to say “I’m giving information about quality to someone who matters.” Don’t even say that. Just say “information to someone who matters.” We need more distance from quality, not less. Every time you talk about testing and quality together, you are creating a false link. It’s not real, and it confuses those who are already confused about what testing is for. Stop it.
  • Lastly the Zen bit. Nobody wants tests to fail, probably. But this is the rub. They don’t fail. You just learn. It’s true. Once you change from failing to learning, an infinite dimension of satisfaction opens up in your testing. You will find serenity in this thought, even in the most turbulent organisation.

Just remember, as a tester, the moment you allow the words “the test has passed” to pass your lips, turn anything green, give a thumbs up or a cheeky grin, however you do it, remember the myth you are perpetuating: that the information gained from exploration is a tester’s responsibility alone.

It’s everyone’s responsibility. I hear that a lot. Let’s live it.

Saturday, 13 June 2015


I’m currently experiencing something I never thought I would.

The technology team I work in isn’t the slightly odd, dysfunctional part of the business, tucked away in the corner, showing signs of madness, gibbering binary nonsense at anyone who strays within range. I believe we are a high-functioning team, and we have gone further: we are reaching out and contributing to improving the wider organisational system. I know, I have fallen off my chair several times.

How are we achieving this state, I hear you ask? I won’t provide an illusion of perfection, but I think we are doing these things really well:
  • Saying yes, because technology is the art of the possible. Genuinely living the ethos that we can build what you need: share your priorities, help us understand, then we’ll get to it. See the next point.
  • Releasing stuff regularly to get feedback. I know this is old hat, but you’d be amazed what it can achieve. Roughly weekly, mostly when it’s valuable, without overloading. Asking “do you want more stuff?”, not “here’s more stuff.”
  • Helping to determine what the organisation considers valuable, over falling down the rabbit hole of what we can provide. The perennial conversation we’ve all been involved in, “I’ll tell you what I want when you tell me what you can give me,” usually results in the wrong solution to a poorly understood problem.
  • Providing coaching on the implications of technology. As in technology and what it means for the business proposition, not just how technology has been implemented. Where is it strong or weak? What choices did we make? 
  • Espousing the value of team. Every team should have everything it needs in order to succeed: if the team needs these skills, then let’s make that happen. Also, it’s not just about filling gaps. For example, not just hiring testers, but encouraging testers to encourage others to take responsibility in appropriate ways. If you hire more testers, you’ll more than likely get more testing, but less likely a great deal more value.
  • Genuinely hiring for behaviours. I’ve played at this before, but that urge to think ‘commercially’ has meant rushed decisions with months of pain. Making the hiring process equitable (you have to be happy with us, not just us happy with you) is much more pragmatic than the misery for both parties of being a square peg in a round hole.
  • Asking for clarity on the mission. Usually the technology teams are the last to know and/or understand the mission of the organisation. Not us. Rather than trying to hide on the periphery, we are front and centre of the mission, discussing how technology can enable us to realise that mission.
I personally love being a part of all this. I hope we can create an organisation which genuinely lives its values. Something I’ve been seeking for a long time.

Sunday, 17 May 2015

Weekend Testing Europe - Testing Session for LinkedIn

A great time was had during Weekend Testing Europe; recommended for all as a way to sharpen up your skills, ask questions, or just get a bit of practice. The exercise, in principle, was to compare features between mobile and desktop on the beautiful LinkedIn app/site...

For those interested, here are the notes my testing session generated:


LinkedIn Mobile App

iPhone 5s iOS 8.3

Search Functionality

## No access to the search algorithm to verify results.
## User needs to authenticate for the search function to be available.
## On tapping search and entering no details, contacts appear alphabetically ascending.
## Cancelling returns to the home tab and reloads the newsfeed.
## Options for the user:
  • Contacts appear alphabetically ascending.
  • Displays a set of jobs you may be interested in.
    ? Unknown algorithm for this, would need to be verified ?
  • Displays an option to set location.
    In a new dialog you can search for a location or use the current one.
    Using the current location prompts for access to location services.
    Don't allow, and the screen goes blank; you can only cancel.
    The app remembers that you have disallowed location use.
  • Displays the companies you are currently following.
    ? What happens when you follow no companies ?
  • Displays your currently joined groups.
    ? What happens when you have no groups ?
## Search
  • Adding search terms filters your own contacts first, then returns those you may know.
    ? Unverified algorithm here ?
  • TO CHECK - Searching on the full name of a contact you already have returns only those you don't.
  • Can search with only 1 character - hopefully this is managed in the search algorithm.
  • TO CHECK - Cannot search by company, only title.
  • Title appears to be fuzzily matched; without a location set you get jobs from all over.
  • With a location set:
    Latest jobs are returned first; lots of jobs in Leeds.
    Location can be removed, returning to the previous results.
    Job title and location can be used in conjunction.


LinkedIn Desktop

Windows 8.1, Chrome 42

Search Functionality

Again, no understanding of the search algorithm.
Be aware that the aim here is different from the mobile experience: search for power and results rather than speed.

  • User does not need to be authenticated to search.
  • Minor functionality is available unauthenticated: searching by first name and last name.
  • The search is a true 'elastic' search, rather than the linear, categorised search on mobile.

Entering text exposes:
  • Showcased Pages
  • Can search with 1 character - 18 results are filtered each time.
    The 18 results are shared between the above categories.
  • The search dynamically filters as you type.
  • Searching on a full name reliably returns connected and unconnected users (the mobile app returns only unconnected users).
  • Company search defaults to three options:
    Those who work there.
    Those who no longer work there.
  • Typing a search term and then clicking search opens a new page with advanced search options.



  • Desktop was a true elastic search, built for power with a dependable connection; mobile was much more category-dependent and about returning immediate results.
  • Desktop was much more powerful, but both filtered and gave suggestions as you entered data. Mobile is good for filtering what you already have; desktop is better for finding new stuff.


If you haven't already, get yourselves signed up for next time:

Tuesday, 5 May 2015

Just give me a ball park? Yankee Stadium.

Two of my favourite estimation conversations (roles are indicative, not pointing fingers).

The What Is It?

Project Manager: "How long will it take you to test our disaster recovery solution?"
Me: "What's your disaster recovery solution?"
Project Manager: "We don't have one yet, but we need to test it."
Me: "I'm not convinced that is a valid approach."
Project Manager: "Well, what shall we do then?"
Me: "Create an disaster recovery solution."
Project Manager: "Can you do that?"
Me: "Yes."
Project Manager: "How long will that take to test?"
Me: "I don't know, I would be creating it, so I can't test my own work"

The Anything But...

Product Person: "Can you provide a forecast for how long these would take to implement?"
Me: "Are you asking for an estimate?"
Product Person: "Lets call it a gut feeling then. In days"
Me: "Is that different?" 
Product Person: "OK, I'll settle for a quote."
Me: "We may be straying into semantics here."
Product Person: "Just try and gauge it."
Me: "You know humans are rubbish at estimating time right?"
Product Person: "I know but I'm just asking for a projection."
Me: *sighs audibly*

As a heuristic, if this rings true:

Time spent generating estimates > Time spent doing the work

Then you should seriously have a think about estimating how long it takes to come up with an estimate. Mind the magical development unicorns sprinkling star dust on your product while you do. Or talk about alternatives to estimates as a means to gather information: try describing what success looks like, or a definition of ready. Break the cycle.

Monday, 4 May 2015

Bad Work

Careers often hinge on shifts in mindset, and I feel as if I have gone beyond a turning point in how I regard my career. In fact, I think it is the first time I have taken true ownership of my direction and values, instead of inheriting and adhering to those of another entity, namely an organisation. This realisation concerns not doing what I would call ‘bad work’ (anymore).

Before I go on, let’s specify what I believe to be bad work:
  • Making a promise to stakeholders about delivery that you know, with high probability, you can’t keep.
  • Not flagging up information that may impact a decision a stakeholder makes.
  • Knowingly doing valueless work, as ordered by a stakeholder.
  • Implementing an exploitative strategy which preys on the ignorance of a stakeholder.
  • Treating someone inhumanely by hiring (or training) them to effect a particular change, then telling them how to do it.

The journey began in March 2013, when I listened to a chap named Huib Schoots speak at TestBash in Brighton. He spoke of refusing to do bad work. I was in awe of the concept of someone taking ownership of their own work in such a way. At the same time, my existing programmed behaviours railed against the concept; internally I believed this was idealistic and didn’t translate to the ‘real world.’

However, my awareness grew and grew. I saw others exhibiting the behaviours listed above, and caught myself on that path on occasion. I spent a significant portion of my time on a project solving a problem that no one could define and, indeed, no one had complained about. Eventually we delivered a system which would secure that product for the foreseeable future; however, my discomfort was sharp throughout this time. Had we solved the problem? Maybe, maybe not. Had I sat in relative silence, or at least acceptance of this fact? I had indeed.

Now, I’m a fairly generous chap; the propensity to do bad work exists on a continuum of consciousness, where people do it unwittingly (“I've been testing this for ages and have developed inattentional blindness to that problem”), ignorantly (“Yes, this will be tested for all possible scenarios”) or knowingly (“I have some information you need to make a decision, but it suits me to retain that information”). We've all existed on this continuum, although hopefully, like myself, on the ‘honest fool’ end of that scale.

As I ventured into the world as a consultant on more strategic engagements, I noted instances of the above behaviours with increased regularity. These were often disguised as pragmatic steps, picking off ‘low-hanging fruit’; sometimes it was more blatant than that: aiming to be indispensable, rather than giving stakeholders the tools they needed to tackle their own problems. I favoured the latter, which often brought me into conflict with stakeholders on all sides of the divide.

So, I decided to follow my values and do something else, with a group of people I respect on a product with grand and hopefully (at least partially) noble ends. One thing I do know is that my career compass is pointing in a different direction now and I feel strong enough to follow it.

As well as my own experience, this was inspired by the following articles: