Johnny Mnemonic - ICEOVERMAD

Fairly early in my testing journey, I attended (and interacted with) the Rapid Software Testing course, which showed me the power of consciously introducing mnemonics into my testing (and life) toolset.

After this initial experience I noticed that these techniques could be leveraged even in the most linear of environments, where the testing process was seemingly restrictive. On further reflection I realised we are all using mnemonics subconsciously to complete certain testing tasks, without acknowledging their fallibility.

So, three years on, having used a number of the mnemonics created by others, I thought I would give it a try myself. I picked something I have worked on a great deal recently: creating test approaches for, and executing testing of, Application Programming Interfaces (APIs), henceforth referred to as 'the service'.

So, without further ado, I can now reveal:

ICE OVER MAD!

Integration – How will consumers integrate with the service? When will consumers integrate with the service? Is it intended to be rendered on a browser? Output into a file and then consumed?

Consumers - Who will be consuming the service? Is the end user a human or a machine? What problem does the service solve for each consumer?

Endpoints – What form does the endpoint take and how is it reached? Is it a single endpoint, multiple endpoints, or routed through a load balancer? What level of security is applied?

Operations – What business functions does the service perform? Can they be mapped to current functions performed via a user interface for example? Are the operations descriptively named and readable for both human and machine? Do the operations handle sensitive data?

Volume - Will the service be used at high volume concurrently or sparingly with single high value requests? Are the single transaction sizes an issue? How will API sessions be managed? Is the target architecture clustered?
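The concurrency questions above can be probed with a small harness. This is only a sketch: `send_request` is a hypothetical stub standing in for a real call to the service under test, and the request count is arbitrary.

```python
# A minimal concurrency probe. send_request is a stand-in stub here;
# in real testing it would call the service under test.
from concurrent.futures import ThreadPoolExecutor
import threading

counter_lock = threading.Lock()
handled = 0

def send_request(payload):
    """Stub for a single API call; a real version would use an HTTP client."""
    global handled
    with counter_lock:
        handled += 1
    return {"status": 200, "echo": payload}

# Fire 50 requests across 10 workers and confirm none are lost.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(send_request, range(50)))

assert handled == 50
assert all(r["status"] == 200 for r in results)
```

Swapping the stub for a real client quickly surfaces session-management and clustering issues under load.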

Error Handling – How will the service handle server side errors? How will the service handle client side errors? Are errors informative and/or verbose? If database connectivity is lost is that handled?
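To make the client-side versus server-side distinction concrete, here is a hypothetical sketch; the status codes, bodies and the informativeness heuristic are invented for illustration, not taken from any particular service.

```python
# Hypothetical error-handling checks: responses below are invented examples.

def classify_error(status_code):
    """Bucket an HTTP status code as a client or server error."""
    if 400 <= status_code < 500:
        return "client"
    if 500 <= status_code < 600:
        return "server"
    return "none"

def is_informative(error_body):
    """Crude heuristic: an informative error carries a non-empty,
    non-generic message rather than a blank or boilerplate body."""
    message = error_body.get("message", "")
    return bool(message) and message.lower() != "error"

# Example responses a tester might capture:
bad_request = {"status": 400, "body": {"message": "field 'email' is required"}}
db_down = {"status": 503, "body": {"message": "database connection lost"}}

assert classify_error(bad_request["status"]) == "client"
assert classify_error(db_down["status"]) == "server"
assert is_informative(bad_request["body"])
```

The lost-database case is worth exercising deliberately: a 503 with a clear message is a very different consumer experience from a hung connection.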

RESTful – Does the service have the characteristics of a RESTful service? Is this desirable in context? http://en.wikipedia.org/wiki/Representational_state_transfer
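One such characteristic is that paths name resources while HTTP methods carry the action. A hypothetical smell check (the endpoint list and action-word set are invented for illustration):

```python
# Hypothetical smell check: RESTful designs put actions in the HTTP method,
# not the path. Endpoint list below is invented for illustration.

ACTION_WORDS = {"get", "create", "update", "delete", "fetch"}

def has_verb_in_path(path):
    """Flag paths like /getUser or /orders/deleteAll as action-oriented."""
    segments = path.strip("/").lower().split("/")
    return any(any(seg.startswith(w) for w in ACTION_WORDS) for seg in segments)

endpoints = [
    ("GET", "/users/42"),   # resource-oriented
    ("POST", "/getUser"),   # action baked into the path
]

smells = [path for _, path in endpoints if has_verb_in_path(path)]
assert smells == ["/getUser"]
```

A service full of such paths may still be perfectly fit for context; the point is to notice the design choice and ask whether it was deliberate.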

Modularity – How are the components of the service distributed? How do they interact? Can they exist on their own? Can they fail over to one another?

Authentication – How do users authenticate with the service? What permissions are applicable and how do they change the operation of the service? What levels of security are used? Is data sent or received encrypted?
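The permissions question lends itself to a simple three-way check: no credentials, insufficient credentials, sufficient credentials. The tokens and the `handle_request` stub below are hypothetical stand-ins for the real service.

```python
# Sketch of an authentication/permissions check. The token values and the
# handle_request stub are hypothetical; a real test would call the service.

VALID_TOKENS = {"admin-token": "admin", "reader-token": "reader"}

def handle_request(token, operation):
    """Stub service: rejects unknown tokens, limits writes to admins."""
    role = VALID_TOKENS.get(token)
    if role is None:
        return {"status": 401}
    if operation == "write" and role != "admin":
        return {"status": 403}
    return {"status": 200}

assert handle_request("bogus", "read")["status"] == 401          # unauthenticated
assert handle_request("reader-token", "write")["status"] == 403  # unauthorised
assert handle_request("admin-token", "write")["status"] == 200
```

Distinguishing 401 (who are you?) from 403 (you can't do that) is itself a useful test of how the service changes its operation per permission level.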

Definitions – What defines the inputs and outputs to the service? Is a WSDL, WADL, XSD, XSLT or Other used? What limits does this impose on the service? Which HTTP methods are used and for what purpose?
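Whatever form the definition takes, it imposes constraints the tester can exercise. As a minimal stand-in for a WSDL/XSD/schema (the `order` field spec below is invented for illustration):

```python
# A hand-rolled stand-in for what a formal definition imposes: required
# fields and types. The order spec below is invented for illustration.

SPEC = {"id": int, "customer": str, "quantity": int}

def validate(payload, spec):
    """Return a list of violations of the definition; empty if it conforms."""
    errors = []
    for field, expected in spec.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"wrong type for {field}")
    return errors

assert validate({"id": 1, "customer": "Ada", "quantity": 3}, SPEC) == []
assert validate({"id": "1", "customer": "Ada"}, SPEC) == [
    "wrong type for id", "missing field: quantity"]
```

Each constraint the definition imposes is also a boundary to probe: what does the service do with a payload that violates it?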


While I was creating this, it felt like I could have added a great deal more to the mnemonic, but to be effective (and memorable) I have focussed on what I believe to be the key areas. So, please feel free to give this a try. Amend, enhance and critique as you see fit.


Comments

  1. Thank you... reached this page from Katrina's API Pathway

