Saturday 16 November 2013

Tell me quick

I've been thinking a lot about what stops change in test practice. One of the areas where I think we encounter the most resistance is in altering the type of reporting that management use to monitor testing.

There's a lot of talk in the testing community about our role being the provision of information about software. Most agree that metrics are not a good means of delivering that information, yet management seem to feel they have to report upwards with percentages and graphs. Why do they feel this way?

When testers start providing richer reporting, managers have to make time to think about and engage with the information. By contrast, numbers are easy to draw quick conclusions from, whether those conclusions are correct or not. It doesn't take long to scan through a set of statistics and forward the report on.

I have recently switched to a role with a dual focus on delivery and people management. I've been surprised by how many additional demands there are on my time. Where a task requires me to flip into System 2 thinking, a slower and more deliberate contemplation, that task gets parked until I know I have a period of time to focus on it.

When I don't get that period of time, these types of task build up. I end up with emails from days ago that await my attention. They sit in my inbox, quietly mocking me. I don't enjoy it; it makes me feel uncomfortable and somewhat guilty.

Imagine those feelings being associated with a new test practice.

In my local community there is currently a focus on using mind mapping software to create visual test reporting (shout out to Aaron Hodder). Having used this approach in my delivery of testing, I find it a fantastic way to show coverage against my model and I feel a level of freedom in how I think about a problem.

For managers involved in delivery, a mind map allows fast and instinctive assessment of the progress and status of testing (System 1 thinking). But for people not involved in the project day-to-day it is not that easy.

Every tester will use mind mapping software differently. The structure of their thinking will differ from yours. The layout won't be as you expect. The way they represent success, failure and blockers will vary. Further, if you haven't seen a mind map evolve and you don't know the domain, it's going to be a challenge to interpret. You'll need to think about it properly, and so the report gets filed in that System 2 pile of guilt.

I don't want to report misleading metrics, but I don't think we've found the right alternative yet. I don't want to change the new way that we are working, but one reason it is failing is that we don't have a strong System 1 reporting mechanism for external stakeholders.

To make change stick we need to make it accessible to others. We must be able to deliver test information in a form that can be processed and understood quickly. How?

I'm thinking about it; I'd love your help.

2 comments:

  1. Tough one this, Katrina, and one that really comes down to the situation - there almost certainly isn't one solution to this.

    The way I see it, with my "ideal world goggles" on, is that it's not a purely testing problem. In an ideal world, it would be the managers et al who demand such speedy, bite-sized knowledge who would learn to adapt, and would learn to engage more with the information provided.

    As you mention, the role of testing is as an information provider, and part of the reason for that (rather than being a quality assurer, for example) is that 'someone else' is in a better position than the tester to interpret the information garnered and make an informed decision.

    Usually, this 'someone else' would be the managers et al you're speaking of, because they have the view of a product/project as a whole from the business' perspective. As such, a refusal to reduce the information hard won through testing into a summarised (and thus less informative) view would actually force them to make better, more informed decisions.

    Returning to reality though, I realise as well as you do that this won't happen anytime soon.

    However, I still think the way we report ought to attempt to nudge and nurdle our managerial friends towards the aforementioned ideal.

    To do this, the visual test coverage models mentioned in your article are an excellent starting point because they present the "findings" alongside the model that led to them. However, if done right, these visual models can still present a summary or statistical view - if that's what they REALLY want.

    Instead of spoon-feeding them meaningless numbers though, you could present the visual model itself and have the manager count their own numbers. How many sessions (represented on the model) are green/red? How many branches are ticked/crossed? Already they're engaging more.

    If your managers can't or won't count for themselves, you could still provide the figures they so crave, presented alongside the same visual model (see the sketch after these comments). They may not look at it - but they just might. And if they do, they might find that information useful.

    So when it comes to reporting - I think it's about finding ways to provide the information THEY want, in the format YOU think is best. If you can give them both, you may eventually arrive at a promised land of not having to provide the out-of-context numbers.

    Sorry for the rant...!

  2. I remember you talking about delivering meaningful info when you were up in Auckland a few months ago. Vanity metrics really are a waste of time.

    With that said, if test reporting were up to me I'd probably try this approach:
    a) We're confident XXXX works (tested thoroughly and passed)
    b) YYY looks likely to have problems (list the issues with this area of functionality)
    c) Give statistics on how many users XXXX and YYY impact, and say that if you want more details, let us know and we can break it down for you.

    To be honest, I've always been a fan of saying things in a short and sweet way - there's a bit of a risk of losing people otherwise.

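One way to offer both views, prompted by the suggestion in the first comment, is to derive the summary figures directly from the visual model so the numbers and the richer picture always travel together. The sketch below is a minimal, hypothetical example in Python: it assumes the coverage mind map is saved as a FreeMind-style .mm (XML) export, that session nodes are marked with the built-in button_ok / button_cancel icons for pass and problem, and that the file name coverage.mm is only a placeholder - adapt all of these to your own tool and conventions.

    # Hedged sketch: count session statuses in a FreeMind-style .mm export.
    # The icon names and status meanings below are assumptions, not a standard.
    import xml.etree.ElementTree as ET
    from collections import Counter

    def summarise(path):
        counts = Counter()
        for node in ET.parse(path).iter("node"):
            icons = {icon.get("BUILTIN") for icon in node.findall("icon")}
            if "button_ok" in icons:
                counts["passed"] += 1        # node marked with a tick
            elif "button_cancel" in icons:
                counts["problem"] += 1       # node marked with a cross
            elif not node.findall("node"):   # unmarked leaf node: not yet run
                counts["not run"] += 1
        return counts

    print(summarise("coverage.mm"))  # e.g. Counter({'passed': 12, 'not run': 7, 'problem': 3})

Because the counts come from the same model the testers actually work in, a stakeholder who only scans the figures can still drill into the map behind them when something looks off.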