Tuesday, 15 August 2017

Encouraging testers to share testing

In theory, agile development teams will work together with a cross-functional, collaborative approach to allow work to flow smoothly. In practice, I see many teams with a delivery output that is limited by the amount of work that their testers can get through.

If testers are the only people who test, they can throttle their team. This can happen because the developers and business people who are part of the team are unwilling to perform testing activities. It can also be the tester who is unwilling to allow other people to be involved in their work. I have experienced both.

There is a lot of material to support non-testers in test activities. I feel that there is less to support testers in feeling comfortable about letting others help. I'd like to explore three things that could prevent a tester from engaging other people to help with test activities.

Trust

Do you trust your team? I hope that most of you will work in a team where your instinct is to answer that question with 'yes'. What if I were to ask, do you trust your team to test?

In the past, I have been reluctant to hand testing activities to my colleagues for fear that they would miss something that I would have caught. I worried that they would let bugs escape. I trusted them in general, but not specifically with testing.

On reflection, I think my doubt centered on their mindset.

In exploratory testing, often a non-tester will take a confirmatory approach. This means that if the product behaves correctly against the requirements, it passes testing. It is easy for anyone, regardless of their testing experience, to fall into a habit of checking off criteria rather than interrogating the product for the ways in which it might fail.

In test automation, it is usually the developer who has the skill to assist. My observation is that developers will generally write elegant test automation. They can also fall into the trap of approaching the task from a confirmatory perspective. Automation often offers an opportunity to quickly try a variety of inputs, but developers don't always look at it from this perspective.
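
To make this concrete, here is a minimal JUnit sketch. The class, validation rule, and inputs are invented for illustration rather than taken from any real product. The first test shows the confirmatory habit of checking one known-good value; the second uses the speed of automation to throw a variety of awkward inputs at the same rule and ask how it might fail.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class PaymentReferenceTest {

    // Made-up validation rule standing in for real product code:
    // a reference is two upper-case letters followed by six digits.
    private boolean isValidReference(String input) {
        return input != null && input.matches("[A-Z]{2}[0-9]{6}");
    }

    // The confirmatory habit: one known-good value passes, job done.
    @Test
    public void acceptsAWellFormedReference() {
        assertTrue(isValidReference("AB123456"));
    }

    // Asking "how might this fail?" rather than "does it pass?": the same
    // check driven quickly through a variety of awkward inputs.
    @Test
    public void rejectsMalformedReferences() {
        String[] awkwardInputs = {"", " ", "ab123456", "AB123456 ", "AB12345", "AB1234567", "ÅB123456", null};
        for (String input : awkwardInputs) {
            assertFalse("Expected rejection of: " + input, isValidReference(input));
        }
    }
}
```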

If you share these doubts, or have others that prevent you from trusting your team to test, how could you change your approach to allow others to help you?

One thing that I have done, and seen others do, is to have short handovers at either end of a testing task. If a non-tester is going to pick up a test activity, they first spend ten minutes with the tester to understand the test plan and scope. When the non-tester feels that they have finished testing, they spend a final ten minutes with the tester to share their coverage and results.

These short handovers can start to build trust as they pass the testing mindset to other people in the team. As time passes, the tester should find that their input in these exchanges decreases to a point where the handovers become almost unnecessary.

Identity

If your role in the team is tester, that identity can be tightly coupled to test activities. What will you do if you don't test? If other people can test, then why are you there at all?

I particularly struggled with this as a consultant. I would be placed into an agile development team as a tester, but often the most value that I could deliver would be in encouraging other people to test. This felt a bit like cheating the system by getting other people to do my role. I believe that a lot of people write about the evolving tester role because of this dissonance.

The clearest way that I have to challenge the tester identity in a constructive way is the concept of elastic role boundaries that I co-developed with Chris Priest. This concept highlights the difference between tasks and enduring commitments. We can be flexible in taking ownership of small activities while still retaining a specialist identity in a particular area of the team.

In simpler terms, a colleague helping with a single testing activity does not make them a tester. I do not see a threat to our role in sharing test activities. I believe that a tester retains their identity and standing in the team as a specialist even when they share testing work.

Vulnerability

The third barrier that I see is that testers are unwilling to ask for help. This isn't unique to testers; many people are reluctant to ask for help. In an agile team, failing to pull in your colleagues to help with testing may limit your ability to deliver.

Even when you think that it is clear that you need help, don't assume that your colleagues understand when you are under pressure. In my experience, people are often blissfully unaware of the workload of others in the team. Even when the day begins with a stand-up meeting, it can be difficult to determine how much work sits behind each task that a person has in progress. Make it clear that you would welcome assistance.

You may be reluctant to ask for help because you see it as a sign of weakness. It might feel like everyone else in the team can maintain a certain pace while you are always the person who needs a hand. In my experience, asking for help is rarely perceived as weakness by others. More often, I have seen teams respond with praise for bravery, eagerness to alleviate stress, and relief that they have been given permission to help.

It can also be difficult to share testing when you work in a wider structure alongside other agile teams that have the same composition as your team. You might believe that your team is the only one where the testers cannot handle testing themselves. In this situation, remember the ratio myth. There are a lot of variables in determining the ratio of testers to the rest of the team. Sometimes a little bit of development can spin off a lot of testing, or vice versa. Test your assumptions about how other teams are operating.

If you are willing to be vulnerable, you encourage others to behave in the same way. A tester sharing testing can encourage others to seek the help that they need too. If you're unwilling to ask someone for help directly, start by making your workload clear to your team and extend a general invitation for people to volunteer to assist.

***

If you work in an agile development team and feel reluctant to share test activities with your colleagues, you might be creating unnecessary pressure on both yourself and your team by limiting the pace of delivery. I'd encourage you to reflect on what is preventing you from inviting assistance.

If you doubt that your colleagues will perform testing to your standard, try a handover. If you worry about the necessity of your role if you share testing, perhaps elastic role boundaries will explain how specialists can share day-to-day tasks and retain their own discipline. If you are reluctant to ask for help directly, start by making your workload clear so that your team have the opportunity to offer.

I encourage you to reflect on these themes and how they influence the way that you work, to get more testers to share testing.

Sunday, 23 July 2017

Exploring the top of the testing pyramid: End-to-end and user interface testing

A few weeks ago I found myself in a circular conversation about end-to-end testing. I felt like I was at odds with my colleague, but couldn't figure out why. Eventually we realised that we each had a different understanding of what end-to-end meant in the testing that we were discussing.

That conversation led to this poll on Twitter:


The poll results show that roughly a quarter of respondents considered end-to-end testing to be primarily about the infrastructure stack, while the remaining majority considered it from the perspective of their customer workflow. Odds are that I'm not the only person who has experienced a confusing conversation about end-to-end tests.

I started to think about the complexity that is hidden at the top of the testing pyramid. The model states that the smallest number of tests to automate are those at the peak, labelled as end-to-end (E2E) or user interface (UI) tests.

Testing Pyramid by Mike Cohn

These labels are used interchangeably in the test pyramid, but end-to-end and user interface testing are not synonymous. I can think of seven different types of automation that might be labelled with one or both of those terms:

Seven examples of user interface and/or end-to-end testing

The table above might be confusing without examples, so here are a few from my own experience.

1. User interface; Not full infrastructure stack; Not entire workflow

In my current organisation we have a single page JavaScript application that allows the user to perform complex interactions through modal overlays. We run targeted tests against this application, through the browser, using static responses in place of our real middleware and mainframe systems. 

This suite is not end-to-end in any sense. We focus on the front-end of our infrastructure and test pieces of our customer workflows. We could call this suite user interface testing.
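
As a rough sketch of what this kind of suite can look like, the test below drives the browser with Selenium WebDriver while a WireMock stub serves a static response in place of the real middleware and mainframe. The port, URLs, selectors, and values are invented for illustration; the real framework will look different.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.Assert.assertEquals;

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AccountOverlayTest {

    private WireMockServer middleware;
    private WebDriver driver;

    @Before
    public void startStubAndBrowser() {
        // A static response stands in for the real middleware and mainframe.
        middleware = new WireMockServer(8089);
        middleware.start();
        middleware.stubFor(get(urlEqualTo("/api/accounts/12345"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"accountId\":\"12345\",\"balance\":\"100.00\"}")));
        driver = new ChromeDriver();
    }

    @Test
    public void accountOverlayShowsTheStubbedBalance() {
        // Assumes the single page application is served locally and configured
        // to call the stub on port 8089 instead of the real back-end systems.
        driver.get("http://localhost:3000/accounts");
        driver.findElement(By.cssSelector("[data-test='account-12345']")).click();
        assertEquals("100.00", driver.findElement(By.cssSelector("[data-test='overlay-balance']")).getText());
    }

    @After
    public void stopStubAndBrowser() {
        driver.quit();
        middleware.stop();
    }
}
```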

2. User interface; Full infrastructure stack; Not entire workflow

I previously worked with an application that was consuming data feeds from a third party provider. We were writing a user interface that displayed the results of calculations that relied on these feeds as input. We had a suite of tests that ran through the user interface to test each result in isolation.

Multiple calculations made up a workflow, so the tests did not cover an entire customer journey. However, they did rely on the third-party feed being available to return test data to our application, so they were end-to-end from an infrastructure perspective. In this team we used the terms user interface tests and end-to-end tests interchangeably when talking about this suite.

3. User interface; Not full infrastructure stack; Entire workflow

In my current organisation we have an online banking application written largely in Java. Different steps of a workflow, such as making a payment, each display separately on a dedicated page. We have a suite of tests that run through the common workflows to test that everything is connected correctly.

This suite executes tests through the browser against the deployed web application, but uses mocked responses instead of calling the back-end systems. It is a suite that targets workflows. We could call this a user interface or end-to-end suite.

4. User interface; Full infrastructure stack; Entire workflow

In the same product as the first example, there is another suite of automation that runs through the user interface against the full infrastructure stack to test the entire customer workflow. We interact with the application using test data that is provisioned across all the dependent systems and the tests perform all the steps in a customer journey. We could call this suite user interface or end-to-end testing.

5. No user interface; Full infrastructure stack; Not entire workflow

In my current organisation we have a team working on development of open APIs. There is no user interface for the API as a product. The team have a test suite that interacts with their APIs and relies on the supporting systems being active: databases, middleware, mainframe, and dependencies on other APIs.

These tests are end-to-end from an infrastructure perspective, but their test scope is narrow. They interrogate successful and failing responses for individual requests, rather than looking at the sequence of activities that would be performed by a customer.
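
As a sketch of the shape of these tests, using Java's built-in HttpClient and an invented base URL and endpoints rather than the team's real API: each test sends a single request and interrogates its response, while relying on the deployed environment behind that URL being fully available.

```java
import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class AccountsApiTest {

    // Invented base URL for a deployed test environment; these tests rely on
    // the full stack behind it (databases, middleware, mainframe) being up.
    private static final String BASE = "https://api.test.example.com/v1";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void knownAccountReturnsOk() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/accounts/12345")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }

    @Test
    public void unknownAccountReturnsNotFound() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/accounts/00000")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(404, response.statusCode());
    }
}
```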

6. No user interface; Not full infrastructure stack; Entire workflow

Earlier in my career I worked in telecommunications. I used to install, configure, and test the call control and charging software within cellphone networks. We had an in-house test tool that we could use to trigger scripted traffic through our software. This meant that we could bypass the radio network and use scripts to test that a call would be processed correctly, from when a person dialed the number to when they ended the call, without needing to use a mobile device.

These automated tests were end-to-end tests from a workflow perspective, even though there was no user interface and we weren't using the entire network infrastructure.

7. No user interface; Full infrastructure stack; Entire workflow

The API team in the fifth example have a second automated suite where a single test performs multiple API requests in sequence, passing data between calls to emulate a customer workflow. These tests are end-to-end from both the infrastructure and the customer experience perspective.
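
A sketch of the difference in shape, again with invented endpoints, payloads, and status codes rather than the team's real API: a single test chains several requests and passes data between them to emulate the customer's journey.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class PaymentWorkflowTest {

    // Invented base URL and endpoints; the point is the shape of the test,
    // where each step feeds the next one.
    private static final String BASE = "https://api.test.example.com/v1";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void customerCanCreateAPayeeAndPayThem() throws Exception {
        // Step 1: create a payee and capture its location from the response.
        HttpRequest createPayee = HttpRequest.newBuilder(URI.create(BASE + "/payees"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"name\":\"Test Payee\",\"account\":\"12-3456-7890123-00\"}"))
                .build();
        HttpResponse<String> payeeResponse = client.send(createPayee, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, payeeResponse.statusCode());
        String payeeUrl = payeeResponse.headers().firstValue("Location").orElseThrow();

        // Step 2: use the new payee to make a payment, the next step a customer would take.
        HttpRequest makePayment = HttpRequest.newBuilder(URI.create(BASE + "/payments"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"payee\":\"" + payeeUrl + "\",\"amount\":\"10.00\"}"))
                .build();
        HttpResponse<String> paymentResponse = client.send(makePayment, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, paymentResponse.statusCode());

        // Step 3: confirm the payment is visible at the end of the journey.
        HttpRequest listPayments = HttpRequest.newBuilder(URI.create(BASE + "/payments")).GET().build();
        HttpResponse<String> payments = client.send(listPayments, HttpResponse.BodyHandlers.ofString());
        assertTrue(payments.body().contains("Test Payee"));
    }
}
```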


As these examples illustrate, end-to-end and user interface testing can mean different things depending on the product under test and the test strategy adopted by a team. If you work in an organisation where you label your test automation with one of these terms, it may be worth checking that there is truly a shared understanding of what your tests are doing. Different perspectives of test coverage can create opportunities for bugs to be missed.

Wednesday, 12 July 2017

Test Automation Canvas

Test automation frameworks grow incrementally, which means that their design and structure can change over time. As testers learn more about the product that they are testing and improve their automation skills, their code changes to reflect that learning.

Recently I've been working with a group of eight testers who belong to four different agile teams that are all working on the same set of products. Though the testers regularly meet to share ideas, their test automation code had started to diverge. The individual testers had mostly been learning independently.

A manager from the team saw these differences emerging and felt concerned that the automated test coverage was becoming inconsistent between the four teams. The differences they saw in testing made them question whether there were differences in the quality of delivery. They asked me to determine a common approach to automated test coverage by running a one-hour workshop.

I am external to the team and have limited understanding of their context. I did not want to change or challenge the existing approach or ideas from this position, particularly given the technical skills that I could see demonstrated by the testers themselves. I suspected that there were good reasons for what they were doing, but perhaps not enough communication.

I decided that a first step would be to create an activity that would get the testers talking to each other, gather information from these conversations, then summarise the results to share with the wider team.

To do this, I thought a bit about the attributes of a test automation framework. The primary reason that I had been engaged was to discuss test coverage. But coverage is a response to risk and constraints, so I wanted to know what those were too. I was curious about the mechanics of the suites: dependencies, test data, source control, and continuous integration. I had also heard varying reports about who was writing and reviewing automation in each team, so I wanted to talk about engagement and maintenance of code.

I settled on a list of nine key areas:

  1. RISKS - What potential problems does this suite mitigate? Why does it exist?
  2. COVERAGE - What does this suite do?
  3. CONSTRAINTS - What has prevented us from implementing this suite in an ideal way? What are our known workarounds?
  4. DEPENDENCIES - What systems or tools have to be functional for this suite to run successfully?
  5. DATA - Do we mock, query, or inject? How is test data managed?
  6. VERSIONING - Is there source control? What is the branching model for this suite?
  7. EXECUTION - Is the suite part of a pipeline? How often does it run? How long does it take? Is it stable?
  8. ENGAGEMENT - Who created the suite? Who contributes to it now? Who is not involved, but should be?
  9. MAINTAINABILITY - What is the code review process? What documentation exists?

I decided to put these prompts into an A3 canvas format, similar to a lean canvas or an opportunity canvas. I thought that this format would create a balance between conversation and written record, as I wanted both to happen simultaneously.

Here is the blank Test Automation Canvas that I created:

A blank Test Automation Canvas

On the day of the workshop, the eight testers identified four separate automation suites under active development. They then self-selected into pairs, with each pair taking a blank canvas to complete.

It took approximately 20 minutes to discuss and record the information in the canvas. I asked them to complete the nine sections in the order that they are numbered in the earlier list: risks, coverage, constraints, dependencies, data, versioning, execution, engagement, and maintainability.

Examples of completed Test Automation Canvas

Then I asked the pairs to stick their completed canvas on the wall. We spent five minutes circulating around the room, silently reading the information that each pair had provided. As everyone had been thinking deeply about one specific suite, this time let them switch to thinking broadly.

In the last 15 minutes, we finished by visiting each canvas in turn as a group. I asked two questions at each canvas to prompt group discussion: is anything unclear, and is anything missing? This raised a few new ideas, and surfaced some misunderstandings between the teams, so notes were added to the canvases.

After the workshop, I took the information from the canvases to create a single A3 summary of all four automation frameworks, plus the exploratory testing that is performed using a separate tool:

Example of Test Automation Summary

In the image above, each row is a different framework. The columns are rationale, coverage, dependencies, mechanics, and improvement opportunities. Within mechanics are versioning, review, pipeline, contributors and data.

I shared this summary image in a group chat channel for the testers to give their feedback. This led to a number of small revisions and uncovered one final misunderstanding. Now I think that we have a reference point that clearly states the collective understanding of test automation among the testers. The next step is to share this information with the wider team.

I hope that having this information recorded in a simple way will create a consistent basis for future iterations of the frameworks. If the testers respect the underlying rationale of the suite and satisfy the high-level coverage categories, then slight differences in technical implementation are less likely to create the perception that there is a problem.

The summary should also support testers to give feedback in their code reviews. I hope that it provides a reference to aid constructive criticism of code that does not adhere to the statements that have been agreed. This should help keep the different teams on a similar path.

Finally, I hope that the summary improves visibility of the test automation frameworks for the developers, business people, and managers who work in these teams. I believe that the testers are doing some amazing work and hope that this reference will promote their efforts.

Friday, 23 June 2017

The Interview Roadshow

Recently I have been part of a recruitment effort for multiple roles. In May we posted two advertisements to the market: automation tester and infrastructure tester. Behind the scenes we had nine vacancies to fill.

This was the first time that I had been involved in recruiting for such a large number of positions simultaneously. Fortunately I was working alongside a very talented person in our recruitment team, Trish Burgess, who had ideas about how to scale our approach.

Our recruitment process for testers usually includes five steps from a candidate perspective:
  1. Application with CV and cover letter
  2. Screening questions
  3. Behavioural interview
  4. Practical interview
  5. Offer

We left the start of the process untouched. There were just over 150 applications for the two advertisements that we posted and, after reading through the information provided, we sent three screening questions to a group of 50 candidates. We asked for responses to these questions by a deadline, at which point we selected who to interview.

Usually we would run the two interviews separately. Each candidate would be requested to attend a behavioural interview first then, depending on the feedback from that, a practical interview as a second step. Scheduling for the interviews would be agreed between the recruiter, the interviewers, and the candidates on a case-by-case basis.

As we were looking to fill nine vacancies, we knew that this approach wouldn't scale to the number of people that we wanted to meet. We decided to trial a different approach.

The Interview Roadshow

Trish proposed that we run six parallel interview streams. To achieve this we would need twelve interviewers available at the same time - six behavioural and six practical - to conduct the interviews in pairs.

The first hurdle was that we didn't have six people who were trained to run our practical interview, as we usually ran them one-by-one. I asked for volunteers to join our interview panel and was fortunate to have a number of testers come forward. I selected a panel of eight where four experienced interviewers were paired with four new interviewers. The extra pair gave us cover in case of unexpected absence or last minute conflicts.

We assembled a larger behavioural interview panel too, which gave us a group of 16 interviewers in total. Several weeks in advance of the interview dates, while the advertisements were still live, Trish booked three half-day placeholder appointments into all their diaries:
  • Friday morning 9.30am - 12pm
  • Monday afternoon 1pm - 3.30pm
  • Wednesday morning 9.30am - 12pm

In the weeks leading up to the interviews themselves, the practical interviewer pairs conducted practice interviews with existing staff as a training exercise for the new interviewers. We also ran a session with all the behavioural interviewers to make sure that there was a consistent understanding of the purpose of the interview and that our questions were aligned.

From the screening responses I selected 18 people to interview. We decided to allocate the candidates by their experience into junior, intermediate, and senior streams, then look to run a consistent interview panel for each group. This meant that the same people met all of the junior candidates, and similarly at other levels.

The easiest way to illustrate the scheduling is through an example.

For the first session on Friday morning we asked the candidates to arrive slightly before 9.30am. Trish and I met them in the lobby, then took them to a shared space in our office to give them a short explanation of how we were running the interviews. I also took a photo of each candidate, which I used later in the process when collating feedback.

Then we delivered the candidates to their interviewers. We gave the interviewers a few minutes together prior to the candidate arriving, for any last minute preparation, so the interviews formally began ten minutes after the start of their appointment (at 9.40am).

Here is a fictitious example schedule for the first set of interviews:



The first interviews finished by 10.40am, at which point the interviewers delivered the candidate back to the shared space. We provided morning tea and they had 20 minutes to relax prior to their next interview at 11am. Trish and I were present through the break and delivered the candidates back to the interviewers.

Here is a fictitious example schedule for the second interviews:



The second interview session finished by 12pm, at which point the interviewers would farewell the candidate and collate their feedback from both sessions.

Retrospective Outcomes

The main benefit to people involved in the interview roadshow was that it happened within a relatively short time frame. Within four working days we conducted 36 interviews. As a candidate, this meant fast feedback on the outcome. As an interviewer, it meant less disruption of my day-to-day work.

We were happily surprised that we had 18 candidates accept the interview offer immediately. We had assumed that some people would be unavailable, as when we schedule individual interviews there is a lot of back-and-forth. Trish had given an indication of the interview schedule when asking the screening questions. The set times seemed to motivate candidates to make arrangements so that they could attend.

By running two interviews in succession, the candidate only had to visit our organisation once. In our usual recruitment process a candidate might visit twice: the first time for a behavioural interview and the second for a practical interview. One trip means fewer logistical concerns around transport, childcare, and leaving their current workplace.

On the flip side, running two interviews in succession meant that people had to take more time away from their current role in order to participate. We had feedback from one candidate that it was a long time for them to spend away from the office.

There are three areas that we may look to improve.

Having six candidates together in the pre-interview briefing and refreshment break was awkward. These were people who didn't know each other, were competing for similar roles, and were in the midst of an intense interview process. The conversation among the group was often stilted or non-existent - though perhaps this is a positive thing for candidates who need silence to recharge?

In our usual process the hiring manager would always meet the person applying for the vacancy in their team. In this situation, we had individual hiring managers who were looking to fill multiple roles at multiple levels: junior, intermediate, and senior. With the interview roadshow approach, some successful candidates were proposed for a role where the hiring manager hadn't met them. Though this worked well for us, as there was a high degree of trust among the interviewers, it may not in other situations.

The other thing that became difficult in comparison to our usual approach was dealing with internal applicants. We had multiple applications from within the organisation and it was harder to handle these in a discreet way with such a large panel of interviewers. The roadshow approach to interviewing also made these people's aspirations more visible, though we tried to place them in rooms that were away from busy areas.

Overall, I don't think that we could have maintained the integrity of our interview process for such a large group of candidates by any other means. The benefits of scaling to an interview roadshow outweigh the drawbacks and it is something that I think we will adopt again in future, as required.

I personally had a lot of fun in collating the candidate feedback, seeing which candidates succeeded, and suggesting how we could allocate people to teams. Though it is always hard to decline the candidates that are unsuccessful, I think we have a great set of testers coming in to join us as a result of this process and I'm looking forward to working with them.

Thursday, 8 June 2017

Using SPIN for persuasive communication

I can recall several occasions early in my career where I became frustrated by my inability to persuade someone to my way of thinking. Reflecting on these conversations now, I can still bring to mind the feelings of agitation as I failed. I thought I had good ideas. I would make my case, sometimes multiple times, to no avail. I was simply not very good at getting my way.

The frustration came from my own failure, but I was also frustrated by seeing others around me succeed. They could persuade people. I couldn't figure out why people were listening to them, but not me. I was unable to spot the differences in our approach, which meant that I didn't know what I should change.

Some years later, in my role as a test consultant, I had the opportunity to attend a workshop on the fundamentals of sales. The trainer shared an acronym, SPIN, which is a well-known sales technique developed in the late 1980s.

SPIN was a revelation to me and I believe that it has significantly improved my ability to persuade. In this post I'll explain what the acronym stands for and give examples of how I apply SPIN in a testing context.

What is SPIN?

SPIN stands for situation, problem, implication, and need.

A SPIN conversation starts with explaining what you see. Describe the situation and ask questions to clarify where you're unsure. Avoid expressing any judgement or feelings - this should be a neutral account of the starting point.

Then discuss the problems that exist in the current state. Where are the pain points? Share the issues that you see and draw out any that you have missed. Try to avoid making the problems personal, as this part of the conversation can be derailed into unproductive ranting.

Next, think about what the problems mean for the business or the team. Consider the wider organisational context and question how these problems impact key measures of your success. Where is the real cost? What is the implication of keeping the status quo?

Finally, describe what you think should happen next. This is the point of the conversation where you present your idea, or ideas, for the way forward. What do you think is needed?

To summarise in simple terms, the parts of SPIN are:
  • Situation - What I see
  • Problem - Why I care
  • Implication - Why you should care
  • Need - What I think we should do

A SPIN example

My first workplace application of SPIN was at a stand-up meeting. I was part of a team that were theoretically running a fortnightly scrum process. In reality it was a water-scrum-fall where testing was flooded with work at the end of each sprint.

I had been trying, unsuccessfully, to change our approach to work. Prior to this particular stand-up I sat down and noted some thoughts against SPIN. With my preparation in mind, at the stand-up I said something like:

"It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once. This means that we keep running out of time for testing, or we make compromises to finish on time. 

If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint. What do you think?"

To my amazement, this was the beginning of a conversation where I finally convinced the developers to change how they were allocating work.

Did you spot the SPIN in that example?

  • Situation - What I see - It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once.

  • Problem - Why I care - This means that we keep running out of time for testing, or we make compromises to finish on time. 

  • Implication - Why you should care - If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

  • Need - What I think we should do - I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint.

In the first few conversations where I applied SPIN, I had to spend a few minutes preparing. I would write SPIN down the side of a piece of paper and figure out what I wanted to say in each point. This meant that I could confidently deliver my message without feeling like I was citing the different steps of a sales technique.

Preparing for a conversation using SPIN

SPIN in a retrospective

As I became confident with structuring my own conversations using SPIN, I started to observe the patterns of success for others. Retrospectives provided a lot of data points for both successful and unsuccessful attempts at persuasion.

Many retrospective formats encourage participants to write their thoughts on sticky notes. When prompted with a question like "What could we do differently?" I noticed that people would usually note down their ideas using a single piece of SPIN. Where an individual consistently chose the same piece of SPIN in their note taking, it shaped how the rest of the team perceived their contributions.

Let me explain this with an example. Imagine a person who takes the prompt "What could we do differently?" and writes three sticky notes:
  1. We all work from home on Wednesday
  2. The air conditioning is too cold
  3. Our product owner was sick this week

All three are observations: the 'situation' of SPIN, describing what they see. Though they might be thinking more deeply about each, without any additional information the wider team are probably thinking "so what?"

Similarly, if your sticky notes are mostly problems, then your team might think that you're whiny. If your sticky notes are mostly solutions, then your team might think that you're demanding. In the absence of a rounded explanation your contribution can be misinterpreted.

I'm not suggesting that you write every retrospective sticky note using the SPIN format!

I use SPIN in a retrospective in two ways. Firstly to remind myself to vary the type of written prompt that I give myself when brainstorming on sticky notes, to prevent the perception that can accompany a consistent approach. Secondly to construct a rounded verbal explanation of the ideas that I have, so I have the best chance of persuading my team.

SPIN with gaps

There may be cases where you cannot construct a whole SPIN.

Generally I consider the points of SPIN with an audience in mind. When I think about implication, I capture reasons that the person, or people, that I am speaking to should care about what I'm saying. If I'm unable to come up with an implication, this is usually an indicator that I've picked the wrong audience. When I can't think of a reason that they should care, then I need to pick someone else to talk to.

Sometimes I can see a problem but I'm not sure what to do about it. When this happens, I use the beginning of SPIN as a way to start a conversation. I can describe the situation, problems, and implications, then ask others what ideas they have for improvement. It can be a useful way to launch a brainstorming activity.

Conclusion

SPIN is one facet of persuasive communication. It offers guidance on what to say, but not how to say it. In addition to using SPIN, I spent a lot of time considering the delivery of my arguments in order to improve the odds of people accepting my ideas.

Though I rarely have to write notes in the SPIN format as I did originally, I still use SPIN as a guide to structure my thinking. SPIN stops me from jumping straight to solutions and helps me to consider whether I have the right audience for my ideas. I've found it a valuable technique to apply in a variety of testing contexts.

Thursday, 11 May 2017

Three styles of automation

At Let's Test next week I have the privilege of presenting a hands-on workshop titled "Three styles of automation". The abstract for the session reads:

A lot of people use Selenium WebDriver to write their UI automation. But the specific implementation language and coding patterns will differ between organisations. Even within the same organisation, a set of front-end tests can look different between different products.

Katrina will share three different approaches to Java-based UI automation using Selenium WebDriver from her organisation. She will explain the implementation patterns, the reasons that differences exist between repositories, and the benefits and drawbacks of each approach.

Participants will download three different suites that each implement a simple test against the same web application. Once they have a high-level understanding of each code base, they will have the opportunity to execute and extend the test suite with targeted hands-on exercises.

In this post I share the code and resources that I'll be using in this workshop. Though you won't get the same level of support or depth of understanding as a workshop participant, I hope you will find something of interest.

Background

These three automated suites are written against a tool provided by the New Zealand tax department: the IRD PAYE and KiwiSaver deductions calculator. Each suite contains a single test that enters the details for a test employee and confirms that PAYE is calculated correctly.

Each suite is reflective of a framework that we use for test automation in my organisation: Pirate, AAS, or WWW. These public versions do not contain any proprietary code; they've been developed as a training tool to provide a safe place for testers to experiment.

Each training suite was created almost a year ago, which means they're showing their age. They still run with Selenium 2 against Firefox 45. We're in the process of upgrading our real automation to Selenium 3, and switching to Chrome, but these training suites haven't been touched yet.

The three suites illustrate the fundamental differences in how we automate for three of our products. Some of these differences are based on conscious design decisions. Some are historic differences that would take a lot of work to change. The high-level implementation information about each suite is:

Pirate
  • Uses Selenium Page Factory to initialise web elements in page objects 
  • Methods that exit a page will return the next page object 
  • Has an Assertion Generator to automatically write assertions 
  • Uses rules to trigger @Before and @After methods for tests 
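
To give a feel for the first two Pirate points, here is a minimal page object sketch using Selenium's PageFactory, in which a method that exits the page returns the next page object. The locators and the SalaryPage stub are invented for illustration; this is not the actual Pirate code.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class UserAndTaxYearPage {

    // PageFactory populates these fields from the @FindBy locators.
    @FindBy(id = "employeeName")
    private WebElement employeeName;

    @FindBy(id = "taxYear")
    private WebElement taxYear;

    @FindBy(id = "next")
    private WebElement nextButton;

    private final WebDriver driver;

    public UserAndTaxYearPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    // A method that exits the page returns the next page object, so a test
    // reads as a chain of steps through the workflow.
    public SalaryPage enterUserAndTaxDetails(String name, String taxYearLabel) {
        employeeName.sendKeys(name);
        taxYear.sendKeys(taxYearLabel);
        nextButton.click();
        return new SalaryPage(driver);
    }
}

// Minimal stub of the next page in the workflow, to keep the sketch self-contained.
class SalaryPage {
    SalaryPage(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }
}
```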

AAS
  • Uses a fetch pattern to retrieve page objects 
  • Provides WebDriverUtils to safely retrieve elements from the browser 
  • Tests are driven by an HTML Concordion specification 
  • Uses inheritance to trigger @Before and @After methods for tests 

WWW
  • Uses Selenide as a wrapper for Selenium to simplify code in page objects 
  • Uses a Selenide Rule to configure Web Driver 
  • Uses @Before and @After methods in the tests directly
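
And for contrast, a sketch in the WWW style, where Selenide's open and $ methods keep the test terse. The URL, selectors, and expected value are invented for illustration; in the real suite a Selenide Rule configures the WebDriver.

```java
import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

import org.junit.Test;

public class PayeCalculationTest {

    @Test
    public void calculatesPayeForATestEmployee() {
        // Selenide manages the WebDriver lifecycle itself; in the real WWW
        // suite a Selenide Rule configures the driver.
        open("http://localhost:3000/paye-calculator"); // invented URL

        // The $ wrapper keeps element lookups and interactions terse.
        $("#employeeName").setValue("Test Employee");
        $("#salary").setValue("50000");
        $("#calculate").click();

        // Selenide assertions wait for the element to show the expected text.
        $("#payeResult").shouldHave(text("8,020.00")); // invented expected value
    }
}
```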

Installation

There are pre-requisite installation instructions to help you get the code running on your own machine. To get the tests executing within each framework, you may have to download and install:

  • git
  • Java
  • Firefox 45
  • An IDE e.g. IntelliJ

You can download the three suites from GitHub. If you haven't used GitHub before, you may need to create an account in order to clone the three repositories.

Comparison

The beauty of these training frameworks is that it is easy to compare the three implementations. If you are familiar with the way that one works, you can easily map your understanding to learn the others. 

In each suite you will see a different page object implementation. The enterUserAndTaxDetails method in the UserAndTaxYearPage is a good example of the different approaches to finding and using web elements:

The same functionality implemented in three different ways

There are different types of assertions in the tests. Pirate assertions are created by an automated assertion generator, in AAS the English-language Concordion specification holds the assertions, and WWW makes use of simple JUnit asserts.

The navigation through the application varies too. Pirate passes page objects, AAS implements fetch methods, and WWW simply uses the Selenide open method.

These differences are apparent when reading through the code, but to really understand them you are best to get hands-on. As a starting point, try adding to the existing test so that a 3% employee KiwiSaver deduction is selected, then make sure that deduction is reported correctly in the summary.

Conclusion

I don't claim that any of these frameworks are a best practice for UI automation. However, they represent a real approach to tests that are regularly executed in continuous integration pipelines in a large organisation. I wish that more people were able to share this level of detail.

I find the similarities and differences, and the rationale for each, to be fascinating. Given the variety within our team, it makes me wonder about the variety worldwide. There are so many different ways to tackle the same problem.

This is a whistle-stop tour of a three hour workshop. I hope to see some of you at Let's Test to have the opportunity to explain in person! If you cannot attend and have questions, or suggestions for improvements in our approach, please leave a comment below or ask via Twitter.

Saturday, 22 April 2017

Introducing testers to developers

While completing my Computer Science degree, I created a lot of software without testers. At the end of my qualification, I searched for my first role as a developer with a limited understanding of the other roles in IT that I would work alongside. I didn't know what a tester was, what they did, or how they might help me. I don't think this is an unusual position for a graduate developer to be in. 

At my first job we didn't have testers. Had I spent a longer period in that company, I may have become an experienced developer who still didn't understand what testers were. There are a lot of organisations that create software without employing dedicated testers. I don't think that a poor understanding of testers is an unusual position for an experienced developer to be in.

Developers who have never worked with testers are likely to have an understanding of testing, but as an activity rather than a role. Testing happens as part of their work rather than being led by a separate person. Why would testing be delegated when a developer can successfully write and release quality software on their own?

If you are a developer with this mindset or history, it can be really challenging to encounter a tester for the first time. Allow me to introduce you through analogy.

No Tester

A restaurant chef is a creator. They make a meal that is delivered to a customer. Quality is determined by the skill of the chef, the quality of their ingredients, and the practices that they follow.

In many restaurants the food is made, delivered to the table, and consumed. The earliest feedback that the chef receives is from the customer. In the event that they have a negative opinion of the meal, it can be difficult to fix the problem. The damage is often done.

There are parallels to software development without testers. Quality is determined by the skill of the developers, the quality of their platform, and the practices that they follow. Feedback comes directly from the customer, who can be unforgiving.

Restaurants can succeed in this model, as can software development.

Waterfall Tester

As a student, I worked part-time as a waitress. In one of the restaurants where I worked, the restaurant manager used a small workstation located directly outside the double doors to the kitchen.

The chef in this restaurant was temperamental and territorial, so the restaurant manager would rarely venture into the kitchen. However, during meal service, she would intercept the wait staff as we moved plates from the kitchen to our customers.

If the meals for a table were ready at different times, she would hold service until they could be delivered together. If the presentation of a dish was sloppy, she would tidy it up. In the rare event that she was unhappy with portion size or the quality of cooking, she would return the plate to the chef. 

The chef didn't like having his food returned. He would often over-correct out of spite. You think the sweet and sour pork portion was too small?! Then you'll get double-sized dishes for the next hour! However, he would ultimately settle into delivering a more appropriate size of meal.

On reflection, this restaurant manager was my first experience with a separate tester. Once development was complete she examined the final product. With a fresh perspective, she could identify problems that the kitchen had missed. 

The restaurant manager contributed to the quality of the product through small actions of her own or by reporting problems back to the chef so that he could make improvements. He may not have always responded well to these reports, but he did alter the meals, which improved what the customer received.

Testers bring the same contributions to software development - a fresh perspective to identify problems, fast feedback from someone with a customer focus, and the ability to make their own small contributions to the overall quality of the customer experience.

Agile Tester

The restaurant manager was improving the quality of a finished dish. What if there was a way to improve the meal as it was being prepared? Then a chef could adapt as they worked, rather than trying to alter their end result.

Sometimes there are multiple elements of a meal cooking at the same time. As a chef, it can be difficult to keep track and sometimes things get burned. What if someone else had set a timer, noticed when it had lapsed, and could intervene?

Sometimes there are ingredients in a dish that look similar and might be easily confused. This week I prepared a meal with two spices: tagine spice mix and couscous spice mix. What if someone else had noticed when I opened the wrong packet?

Sometimes there is a requirement for consistency between meals. How can a chef be sure that the crème brûlée they create today is the same as yesterday? What if someone else could check that the recipe, portion size, and presentation met a minimum standard?

Hypothetically, this all sounds reasonable. Realistically, if you're a chef who likes to cook alone, this might sound like a nightmare. Perhaps you would rather have to throw away the occasional plate of food that was burned or tasted strange than accommodate another person in your creative space.

There is a parallel to the role of a tester in agile development. Success relies on collaborative working relationships that can be difficult to negotiate. Dealing with feedback from a tester can be frustrating, particularly when a developer is not used to receiving it. The tester will need to explain and demonstrate how their perspective can contribute to an improved product. 

It can be difficult for the developer to adapt their approach and accommodate feedback. If they find that the information from a tester is not relevant, timely, or delivered in a constructive manner, then they should let the tester know. It takes time to form a useful working relationship and investment is required from both sides.

Every Tester

Whether a tester is contributing in agile or waterfall, they want to help deliver the best possible product to customers. If you are a developer who is encountering a tester for the first time, the first thing to assume is positive intent.

Testers bring a fresh perspective to identify problems, fast feedback from someone with a customer focus, and the ability to make their own small contributions to the overall quality of the customer experience. Allow them to influence the product. Ultimately they help to create a better outcome than what you could achieve alone.