Managing and Achieving Goals

This is my contribution to the Ministry of Testing January 2021 Blogger’s Challenge.

It’s generally agreed that for someone to have the best chance of achieving something, they must recognise it as a goal. But that’s just step one; once you’ve decided on a goal, how do you go about managing it? Deciding this can be a big obstacle to making progress, and making the wrong decision can be fatal for your goal.

There are lots of books, articles, courses and videos available that insist they have the answer to your goal management problem, and that following their technique will unlock your ability to achieve your goals. Over the years, I’ve found that many of these contradictory assertions are indeed right – at different times, each has been the best way for me to achieve a particular goal.

I believe that the ‘best’ goal management technique varies, not just from person to person, but also for individuals over time and depending on their goal. Therefore, in this blog post, I’m not going to discuss any particular technique, but instead pull out some key learnings that I’ve taken away from across the various frameworks I’ve seen and used. These are points that I’ve found critical when striving for my goals and supporting others to achieve theirs.

  1. Own it: This is your goal and you need to understand that only you can make it happen. It’s no good to agree a goal with your team or your manager, and then hope that it’ll just come true or that someone else will make it happen. You need to own your goal and put in the work to make it happen.
  2. Be committed: You need to believe that you can and will achieve your goal. If you don’t believe you can achieve it, or if you think of it as more of a WIBNI (‘wouldn’t it be nice if…’), it won’t happen.
  3. Have clarity: Understand what your goal really is. Take the time to really think it through and be honest with yourself: what is it you actually want to achieve? Once you’ve done that, double check: if this is what you achieve, will you be satisfied?
  4. Outline the path to success: Once you’ve set a goal, you need to have an understanding of how you will get there. Personally, as a visual thinker, I find that the best way to approach this is to position my goal at the top of a pyramid, and then to fill in the lower layers as building blocks required to reach that peak. The middle layer consists of a deeper dive into what my goal actually means, and the third layer contains actions which push me towards an element in the second layer (so, in the lockdown-inspired example below, all the actions in the bottom layer will push me towards the “understand and master houseplant care” objective – in a complete plan, there would be actions corresponding to the other objectives, too).
Layer 1: What is your goal? e.g., to become a houseplant maestro.
Layer 2: What does that actually look like? e.g., understand and master houseplant care, be familiar with common houseplant varieties, own a range of thriving plants, create interesting and attractive houseplant displays.
Layer 3: What can you do to get there? e.g., considering the "understand and master houseplant care" objective: buy and read a houseplant guide, watch video on propagation and attempt it with spider plant, collate advice received in notepad, sign up to online blogs/newsletters and read each week.
Example of how you might use the pyramid model I’ve proposed to outline a plan for achieving your goal.
  5. Continuously iterate: As you make progress, you’ll learn new information: new and different ways to make progress, reasons things you’d planned won’t help you progress, or even realisations that you want to adapt your goal to achieve something different. Make sure you give yourself regular opportunities to recognise this new information, think it through properly and integrate it into your plan.
  6. Get the resources you need: Identify what you need to be able to achieve your goal. What time, money, information, equipment, access to people, or skills do you need? Do you have access to this? If not, make sure you work out how to get it!
  7. Share your ambitions: Tell others about your goal. If they know, they’ll be able to help. I often hear people asking whether or not it’s a good idea to mention their ambition to their team, mentor or manager. Unless your environment is an unhealthy one, I believe it’s a no-brainer! If someone’s there to help you, they can only do that effectively if they understand what ‘help’ is. Also, by sharing your goal, you unlock the possibility of accountability partners; these can be invaluable when it comes to progressing a difficult plan.

And one last piece of advice, to summarise how I started this post…

  8. Innovate: Don’t just stick to the same system blindly; you might get too used to it and stop finding it motivational, or it might not work for a specific goal you have. Ensure you regularly assess whether you’re making the progress you want to, and if not, you may need to try a different approach!

Risk is…

Risk adds complexity to all software engineering projects. It’s difficult to define; some define it subjectively as an uncertainty, others define it mathematically as a function of impact and likelihood. However you define it, there’s no one-size-fits-all answer to how to approach it, yet it shapes our whole test approach: what we test, when we test and how much we test.
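The mathematical view can be sketched as a simple prioritisation exercise: score each candidate test area by impact and likelihood, then rank. A minimal sketch follows; the areas and the 1–5 scales are illustrative assumptions, not taken from any real project.

```python
# A minimal sketch of the "risk as a function of impact and likelihood"
# definition: score candidate test areas and rank them to direct testing.
# The areas and the 1-5 scales below are illustrative assumptions.

def risk_score(impact: int, likelihood: int) -> int:
    """One common formulation: risk = impact x likelihood."""
    return impact * likelihood

# (area, impact 1-5, likelihood 1-5)
areas = [
    ("upgrade path", 4, 3),
    ("new feature flag", 3, 4),
    ("legacy report export", 2, 2),
    ("login flow", 5, 2),
]

ranked = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk {risk_score(impact, likelihood)}")
```

In practice the scores matter less than the conversation they force: agreeing on relative impact and likelihood is where the real analysis happens.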

When I read that the Ministry of Testing July/August bloggers club challenge was “Risk is…”, it took me a while to decide exactly what to write about. Eventually, I decided to use the definition that a risk is a potential mode of failure to explore risks that have been realised as actual failures in my recent projects. I hope that these anecdotes either teach you something and stop you making the mistakes we made, or raise a smile as you remember the time that you already did!

Risk: Not enough time to investigate all the bad smells

Whilst testing a recent release on a tight timeline, we ran our regression tests and found some issues. We investigated one issue in particular and traced it back to our test network configuration: some settings had been changed and were now inconsistent with others. With this set-up, we couldn’t expect the clients to work. Having debugged the issue this far and knowing that it would not be trivial to resolve, we made a call to move it onto our backlog for after the release deadline and focus our remaining time on other issues raised by the regression testing.

Unfortunately, this proved to be the wrong call in this case. Before we had a chance to pick the task up from our backlog, issues raised in the field highlighted that there was a regression in this functionality. The broken configuration was hiding a bug. We could have found this had we decided to fix the configuration and rerun these particular tests, however, due to time pressures with the project, we had decided against this. The risk that we would miss some issues due to our tight timeline came true.

Risk: Too many variables to test exhaustively

We develop very complex products; our front-end clients expose a range of different features based on the combination of configuration set at the levels of individual users, their businesses and their service providers. This results in a vast test space; the skill of our testers is to analyse the potential interaction between different variables and prioritise those against the quality criteria a project is aiming for. In other words, we evaluate the likelihood and impact of an issue, and use this, the risk, to direct our testing. Inevitably, we get this wrong sometimes, and in a recent project, we missed a key interaction resulting in a crash after upgrade for users with a particular setting.
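To illustrate why this test space becomes vast so quickly, here is a hedged sketch with a handful of hypothetical configuration variables at the user, business and provider levels (all names and values are invented). Exhaustive combinations grow multiplicatively, while the number of two-variable interactions a tester might analyse stays far more manageable.

```python
from itertools import combinations

# Hypothetical configuration variables at the user, business and service
# provider levels; the names and values are invented for illustration.
options = {
    "user_locale": ["en", "fr", "de"],
    "user_device": ["desktop", "mobile"],
    "business_plan": ["basic", "premium"],
    "provider_feature_x": [True, False],
    "provider_region": ["eu", "us", "apac"],
}

# Exhaustive testing needs one test per full combination of settings.
exhaustive = 1
for values in options.values():
    exhaustive *= len(values)

# Analysing interactions pairwise is far more tractable: count the
# distinct two-variable value pairs a tester would need to reason about.
pair_interactions = sum(
    len(a) * len(b) for (_, a), (_, b) in combinations(options.items(), 2)
)

print(exhaustive)         # 72 full combinations for just five variables
print(pair_interactions)  # 57 pairwise interactions
```

A pairwise covering suite can exercise all of those interactions in roughly a dozen tests; weighting each interaction by impact and likelihood then directs where the remaining deeper testing goes.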

Risk: Getting the balance between conflicting requirements wrong

Due to the range of servers in our solution, we often try to limit the number of servers that an enhancement impacts in order to simplify the version compatibility requirements and complexity of the delivery to customers. This was an important angle in the design of a recent project we delivered; however, once this hit the field, we saw the impact this decision had on diagnosability – something we’d not spotted in testing! An intermediate server was not changed, and therefore, despite the various new error cases we could hit, it always sent the same error code to the client. As a result, the diagnostics our support team received from clients (the easiest information for them to access) appeared the same across a range of issues seen in the field.

In this case, we’d prioritised a requirement that was important to some stakeholders, but at the cost of others. Had we evaluated this risk more accurately and realised the cost of this trade-off, we’d probably have arrived at a different decision when considering these conflicting requirements.

Risk: Test environment not simulating the real world

As hard as we try to simulate our real-world deployments with our test environments, there will always be minor differences with impacts you haven’t pre-empted. In this case, we made assumptions about user devices and network conditions as part of two fixes in a release. When combined, these assumptions simply didn’t stack up, and as different engineers were working on the two issues, nobody had a full picture of the risk involved in making both of these changes.

We learn from mistakes like these by improving our internal processes, but we also improve our test practices. Testing in prod is a hot topic in the test community, but as a B2B organisation, this isn’t easy for us to do; there are thousands of prods and we don’t own them. Fortunately, we do own one of them, as we dogfood our solutions. Having fallen foul of this risk, we resolved to make better use of beta testing in our dogfooding program, as doing so might have saved us in this case.

Testing is like… writing a blog post

A surprisingly busy month of lockdown meant I left myself only one evening to complete the Ministry of Testing June/July bloggers club challenge, so I’ve thrown together a quick, tongue-in-cheek post about how I’d rather this month had gone…

Testing is like writing a blog post for a bloggers club. You start off with a brief; the blog title, a risk. This gives you the problem you need to tackle, but it’s up to you to decide how to approach it.

You come up with an idea; a topic, a charter. This is your decision about the direction you’d like to head. You throw around some ideas, a couple of bullet points about what might be interesting to cover. This starts to take shape, and eventually, you have a strategy, a clear idea of the key points you’d like to explore.

You embark on the task; the thinking, the testing. You switch off from what’s around you to focus on the ideas you have. Your exploration builds up a picture, a model. You keep notes to keep track of your thoughts, observations and questions. You question yourself, whether your ideas are valid, whether what you’re thinking is interesting to others; would there be value in sharing it?

You’ve convinced yourself of your idea, your model; you’re at a point where you think you understand things, and you now want to convey that understanding to someone else. You begin writing your post, your test report. You need to articulate things clearly, focusing on the audience and their model of how things work. You need to focus on things that are important to them.

You’re pleased with where you’ve got to. You’ve taken your brief, have an understanding you’d like to share with people and have that down in writing. Before you share that, however, you get a second pair of eyes on your summary; in the form of a draft review, a test debrief. You run your thinking past someone else to get their insight and opinion. They might spot errors in your logic, omissions from your brainstorms and possible extensions that you could explore. After their insight, you have the confidence that you’re ready to share wider. You share the blog, the test report.

Stop #3: How testing style affects pairing

I’ve been on three stops of my testing tour so far; a busy time both personally and professionally over the last few months has meant I’ve not been able to give my tour the focus I’d hoped when I decided to embark on it. However, the three stops I have been on so far have been very interesting, educational and fun!

I’ve been using strong-style as a basis for the pairing. With each session, we’ve agreed that my pair should take the role of the navigator and I should take the role of the driver.

Interestingly, I’ve found that the pairing has manifested in a different way each time. I believe there are many factors affecting this, but in this blog post, three stops into my tour, I’m going to focus on how the tester and their testing style has influenced it.

The tester I was working with for my first stop had a strongly consume-first approach to their session. What I mean by this is that they did not have an outline for the session beyond the charter, but instead used the information they found during one test to inspire their next test. This contrasts with how I generally work; although I also use information I discover by testing to influence the detail of the session, I generally follow a high-level plan for the session that I’ve decided on before I start. I found that this difference in our navigation styles unnerved me as the driver, but as I’d taken this role, I’d agreed to step back and let my pair lead the navigation. Honestly, I found this a bit uncomfortable, and as mentioned in my blog on this session, I found keeping out of the session navigation a difficult thing to do. I was left questioning whether the differences in our testing approaches meant that strong-style pairing was not for us. Or, to fully embrace strong-style pairing, would I need to learn to trust my navigators and embrace their approaches to navigation?

Fortunately, those concerns did not reappear in my second session. This was done with a tester who started her career as a tester in the same department that I did; unsurprisingly, this had resulted in similarities in our approach to test sessions, including in the level of preparation and testing thinking done before the session. As we sat down to begin the session, she outlined her plan for how we would implement the charter, and this helped to reassure me that the navigation was under control. The session went well and we both enjoyed collaborating on some testing as a change to our normal routine.

However, afterwards, I was left wondering how much value the pairing really added! Given that she’d already prepared much of the navigation of the session, the strong-style split meant that she was less actively involved during its implementation than my pair in the first session had been. The information we uncovered during testing did influence which tests we did next, but as she’d explained the aims of the session and general plan for navigation so clearly beforehand, I was able to foresee most of the changes to the testing that she would make. I worried that her approach to the session meant that she’d already done sufficient navigator-level thinking. Perhaps this is a habit a tester can develop when they’re used to working on their own: if they separate out the navigator thinking and do as much as possible before the session, they can focus on driving during its implementation? Does this habit limit the usefulness of pairing in a session? At this point, I was already convinced that many factors influence whether a session itself is right to pair test on, but I was also beginning to realise that a tester’s approach and style might influence how they should approach pairing.

My third session came in the form of revisiting an old team of mine, and pairing with one of the most experienced testers in the organisation. When I planned my tour, I hoped that this session might help to answer my question of how much value pairing can bring to a testing session, as this tester was certainly capable of conducting a successful session individually. We paired on investigating the behaviour of a complex system under varying user configuration, by sending in seemingly straightforward user requests and observing whether the system performed as a user might expect. We adapted the roles slightly, as the testing required observing the interaction between two endpoints, and it made the testing easier if I drove and observed one of them while he observed the other. Choosing to be flexible with the strong-style roles added to the session, making it easier to conduct the testing without taking away from the benefits that the strong-style roles were bringing. I was pleased with this use of flexibility, which was more successful than on my first tour stop (discussed in my previous blog post). However, when debriefing, my pair admitted that despite the value it brought to the testing, he had found the strong-style structure frustrating. He said that when we found interesting things, he had an urge to grab the controls, and not being able to do so impaired his testing flow. On his own, he would have ‘just tried something’, but he felt that needing to articulate his ideas was a barrier to this; as a result, he thought the testing was less exploratory than it would have been if he were on his own.

How fascinating! He described feelings of frustration very like those I felt in my first session, but towards losing control of driving rather than of navigation. Is it just difficult to take a step back and relinquish half of the testing responsibility when it is so ingrained in your testing practice? Is this perhaps something it takes a while to get used to with strong-style pairing? Or do some people just find losing control over half their testing harder than others? My other thought is that perhaps some testers are more attached to one of the strong-style roles; would this last pair and I have been happier if I was navigating and he was driving? It seems like I’m definitely still at the stage where I’m finding more questions on my tour than I’m answering, but I hope to answer these as my tour progresses!

Stop #1: A lesson in pairing

Last week, I embarked upon the first paired testing session of my testing tour. As I’ve only done a couple of paired sessions in the past, I felt I needed to focus on the mechanics of this in my first session, so that I could decide how best to structure future sessions in the tour. Therefore, I started my tour fairly close to home, in a team with a similar role and product set to my own current job. This minimised the effort spent getting my head around new context and maximised my focus on the effectiveness of the pairing.

We set out to use strong-style pairing, with my pair as the navigator as he had the domain knowledge and context for the session, and me as the driver. I quickly learnt that the job is not done once you’ve agreed on a pairing style; it takes active effort throughout the session to stick to it.

During the session, we hit a problem with the set-up that we needed to investigate, and it was whilst debugging this that we first slipped out of the strong-style pairing. At first, I tried to ensure we both stayed involved, but it was clearly much more efficient for the tester with familiarity of the test rig to debug. I have a general concern that during my tour I will be detrimental to my partners’ efficiency without adding sufficient value to offset this. Because of this worry, and as I couldn’t see any value in me driving the debugging, I let my pair press on with the debugging on his own. This became the first question I want to answer on my tour: how do we (or can we) add value to debugging by pairing?

At another point in the session, our pairing broke down in a very different way. This time, it began with us straying into an area where I had more testing experience, and so I was able to contribute navigation ideas. We organically started switching pairing roles based on who had an idea (i.e. “I’ve got a thought, why don’t you try…”). This flexibility felt positive and we seemed to be making good headway in our testing. However, it began to lead to confusion over who was navigating and how we should progress. We found ourselves formulating test ideas together, which meant it was unclear who should drive them, and also both coming up with conflicting ideas of where to head next. I realised that I had not thought enough about how the session could work with role-switching, having assumed that I would remain the driver throughout.

So, did I achieve what I wanted to with the first stop of my tour? I think so, yes. I learnt a lot about strong-style paired testing, and I’m very hopeful that the observations, lessons and ideas I gained will mean I’m better prepared to focus on other things in later sessions. I have since written an FAQ-style list to send to my future testing pairs in advance of our sessions, so that they know what to expect from me and what I am expecting from them. I plan to update it after each session, and will post ideas from it here (and perhaps the full list, when I’m completely happy with it). Overall, I’m glad I’ve had the chance to work some of these details out before I stray much further from my comfort zone!

Going on tour

I am going on a testing tour. This was inspired by Elizabeth Hock, who talked about her own testing tour at TestBash Brighton. Her purpose was to “become a better skilled tester” and she believed that pairing was a good way to do this.

I have decided to go on my own testing tour because I think it will help my development as a tester in a variety of ways. My plan is to pair up with other testers on their work, roughly once per month. Most sessions will be internal to my company, although I have plans to extend it if it goes well.

I hope to learn:

  • about new products and technology, widening my knowledge of my company’s product set. This will be helpful as I am now working on a solution testing project which includes many products, some of which are new to me.
  • how best to test the products in my solution. I’ll hopefully have lessons to bring back to my team and apply to my day-to-day work.
  • some practical testing techniques, tools and aids. How do other testers structure their sessions, keep notes and set up their environment?
  • some interesting testing theory. How do other testers ensure their test sessions achieve what they set out to? How does solution testing differ from product testing? How do other test teams choose what to test?

I also hope to develop my confidence as a software tester. Until this year, I had not tested anywhere other than one system test department, where a lot of the techniques and processes have been long established. For most testing tasks, there was a precedent for the right way to do it, and I know that this heavily influenced my testing skills and style. It would be encouraging for me to see that the skills I’ve learnt are transferable to other areas and types of testing (or useful to know if not!).

I’ll be documenting the progress of my tour in this blog. Please check in to join me on this journey and please engage with my posts and share your own experiences. I’m sure that I’ll be finding lots of questions, as well as answers, and if you can offer advice on any of my musings, I’ll be very grateful.