Metastuff


The world is really complex. Layers upon layers of stuff exist in the systems we test. Old stuff on top of new stuff, augmented stuff, rewritten stuff, interconnected stuff, internal stuff talking to external stuff, data from here, there and everywhere. Stuff is done on the move, stuff is done from places we never imagined. We are stuffed with stuff, and stuff about stuff. Metastuff.

The Problem with Stuff…

When testing this stuff, it’s pretty easy to get full of metastuff quite quickly. Then you get that post-Christmas lunch drowsiness, where you probably should be playing a board game with your family but for some reason you can’t move and are staring at the EastEnders Christmas Special (Phil is leaving Sharon, Beppe is selling E20). Being stuffed with metastuff has left you dull-witted, unable to focus on what matters.

Have I seen this in my fellow testers? Yes. Have I experienced this when testing complex stuff with multi-layered architectures? Oh yes.


Coping…

There is a way to cope though. You can do it. You remember when the Emperor shoots lightning from his fingers in Star Wars? You need to be able to do that. In reverse, where your fingertips are your enhanced and extended senses, and focal points for targeted information, filtering out the stuff. You can cackle like a maniac while you’re doing it if you want.

We need to complement our existing senses. This can be achieved by creating (and then sharpening) our feedback environment while we test.

First Principles…

As is healthy in testing, as it is in life, let’s start with principles:
  • Find out what matters to your stakeholders. If you have one client who provides 90% of the revenue and 90% of what they do is one specific action, focus your feedback there. I am saying this for completeness, as you already know this, right?
  • Complex systems are best understood in layers; receiving feedback from each layer is critical to creating an effective feedback environment.
  • Do not fear asking for systems to return concise information pertinent to testing: timestamps, exception information and the like. This includes reducing the noise within logs and auditing tables, so only critical information is presented, rather than everything for information's sake. Verbosity can be controlled, and your early environments can afford to be more verbose than your production instances.
  • Distinguish between synchronous and asynchronous parts of a system. Knowledge is power. If you have a system which performs some batch operations synchronously while other asynchronous actions are occurring, you need to test that, but you also need a way to isolate the feedback from each.
  • Use tools and techniques to make the feedback environment interactive and colourful. We see patterns, it’s one of our strengths, but it can be assisted. Make it move, and make stuff that is out of the ordinary look out of the ordinary (see the highlighting sketch after this list).
  • Knowing how to set your environment to a known state is critical, and not just the test data either. Being able to set your whole system state to a known point, one which is clean and as predictable as you can make it, is critical to a feedback environment.
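
On the colourful point above, even plain grep can do a lot of the highlighting for you. A minimal sketch, assuming a hypothetical application log at /var/log/app/application.log (the path is illustrative, not from any particular stack):

    # Follow the log and colour anything alarming; the trailing |$ matches every
    # line, so normal output still scrolls past but only the keywords are coloured
    tail -f /var/log/app/application.log | grep --color=always -E "ERROR|WARN|Exception|$"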

Specification by Example…

Here is an example setup for a system with a three-tier architecture, which I find is a tremendous way to show what I’m talking about. For example, this may be the setup for a testing session:

Establishing a baseline…a clean, (relatively) known system starting point
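
For the three-tier example, establishing that baseline might look something like the sketch below. It assumes a hypothetical LAMP-style stack; the database name, snapshot file, log paths and service names are all illustrative:

    # Restore the database to a known snapshot
    mysql testdb < ~/snapshots/testdb_baseline.sql

    # Empty the logs so the session starts without yesterday's noise
    sudo truncate -s 0 /var/log/httpd/error_log /var/log/app/application.log

    # Restart the services so no stale state carries over into the session
    sudo systemctl restart httpd php-fpm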


Setting up to listen while you test…a way to generate continuous feedback
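
One way to listen while you test, using the same assumed log locations as the baseline sketch, is to keep every layer reporting in from a single terminal:

    # Follow the web, application and database logs together; -F keeps following
    # across log rotation, and tail labels each chunk with the file it came from
    tail -F /var/log/httpd/error_log /var/log/app/application.log /var/log/mysqld.log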



Gathering feedback and generating coherent information…relevant and timely to the session
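
And a sketch of turning that raw feedback into something coherent at the end of the session, again with an assumed log path: group and count the errors so repeating patterns jump out rather than scrolling past.

    # Most frequent ERROR lines during the session, most common first
    grep "ERROR" /var/log/app/application.log | sort | uniq -c | sort -rn | head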


These are the steps of digging deeper, each one raising the level of interrogation: find an error in the PHP logs, discover the Zend Event, then follow the Code Trace. This depth of information can be further refined by targeting log files with specific criteria, based on what you are focusing on, by piping them into grep. For example:

Looking out for specific process messages using tail -f /var/log/messages | grep "sftp", or monitoring processes similarly with watch -n 10 -d "ps aux | grep httpd".
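
If you need the surroundings of a match as well as the match itself, grep will bring the neighbouring lines along too. The log path and search string here are only illustrative:

    # Show 5 lines either side of each fatal error, so the trace around it comes too
    grep -B 5 -A 5 "PHP Fatal error" /var/log/php-fpm/error.log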

The above is a good link into tooling, which is less the focus of this blog, but tooling is hugely useful when used to complement your own senses and configured intentionally (as in, not swamping you with information, and pointed at the focus of the testing session). Personally I think that is a separate blog, and it is probably useful to show how I leverage certain tools (such as Android Studio, for example) to enhance my testing senses.

Debugging innit…?

Onwards. This is distinct from debugging in my view, as you are providing the where and what of a problem over the how and why. There is no barrier to the how and why of course; it depends on your context. This does rely on a certain level of technical awareness (as in, I know the strengths and weaknesses of the technology and how to use it to enhance the information I glean), but in a multi-skilled team this should not be beyond a collaborative effort. Further to that, I do believe that a feedback environment such as the one above is a team artifact, not only a tester’s domain. A tester’s bug reports should complement the debugging flow in a given technology or team, so only a few steps are required to begin the diagnosis of the how/why.

Further to this, your feedback environment might look a lot like your monitoring setup, which can be applied on a micro scale (for a tester to use, for example) or a macro scale (for a system, cluster or federation to use), thus creating a link between common feedback mechanisms for that tricky transition from development to operations. I won’t say the buzzword.

Over to you…

Next time you begin a testing session, ask the question:

Are my stakeholders getting value from the information this system can provide?

Then go a step further and ask:

Is the information the system provides filtered to my focus and therefore meaningful in context?

This will not replace your (un)common sense, but it may help save you from being inundated with a growing tidal wave of stuff. And all the stuff about that stuff.
