Toby Martin

Toby is the QA Practice Lead at Pushpay, with over 25 years of experience in testing and quality practices. Throughout his career, Toby has worked on a diverse array of technologies, including embedded power systems, medical robotics, and SaaS payment solutions.

With a wealth of knowledge and hands-on experience, Toby continues to be deeply involved in all levels of QA, advocating for risk-based testing and shift-left quality practices.



Stop automating, start investigating: a guide to slowing down to speed up

In an environment where code releases occur 15 times per day, it's natural to assume that increasing test automation is the way forward. However, I believe differently.

In an industry where sheer quantity of tests is often prioritised, at Pushpay we champion quality over quantity. Learn how we're implementing a risk-based approach, shifting left, and embracing a methodology that underscores the value of slowing down to ultimately accelerate the development process.

Transcripts

[Aaron Hodder] So, Toby is the QA Practice Lead at Pushpay.  He has over 25 years of experience in testing.  

Here to give us his perspective on what a risk-based, shift-left approach can look like, please join me in welcoming Toby Martin.

[Applause]

[Toby Martin]  Kia ora.  Good morning.  

Before I get too far into this...  Stop automating.   

[Laughter]

Well as I said, before I get too far in, I wanted to take a very quick opportunity to thank Camy, Aaron, and Prae, for this opportunity to share my thoughts with you. Mostly because I know that this is going to rub some people up the wrong way.

[Laughter]

And that's okay.  But, what I will ask is, bear with me through until the end.  I'm hoping there's a few gems in here you can take away, apply, and get some value out of.  

So, who am I?  

Well, as you know, my name's Toby.  I've been in the industry for 28 years.  And, I am the Quality Practice Lead at Pushpay.

I actually started out as a Test Automation Engineer, working on embedded power systems, back when I actually had a chin.

[Laughter]

I've worked on a lot of other technologies: flight automation, very, very cool medical robotics… That was a fun time.  And I'm now in SaaS payment solutions.

Throughout my career, I have tried very hard to focus on quality, not just testing.  So, what I aim for is looking at the bigger picture of quality.  

Where are all the bottlenecks?
Where are the things that slow us down?  

I want to share some of my thoughts, some of the learnings I've had over 28 years, and especially on a couple of things that still bother me.  

I have seen a great many technologies come and go, over the years.  And, we still have a few of these things hanging around, that I wish would go.  So, hopefully we can tackle some of that today, as well.  

So, I'm going to cover two topics.  

Some industry problems.  No one likes to hear about the problems, but we're in QA, that's our job.

[Laughter]

And then I want to talk about providing value.  Now, you'll hear me use the word "value" over and over.  There's a reason for it.  

So, industry problems.  We are a young industry.  Now, to talk about these problems, we actually have to go back in time, a little bit, and look at the history of software itself.  

So, back in 1948...we started writing software.  And at this time, there weren't any QAs.  Like most industries, QA is late to the game.  People build products, and, whoops, we'd better make this usable for our customers.

So, 1948, we started writing the first software.  It took another 10 years before the idea of having a testing team, of actually formalising the testing of software, took hold.

It was another 20 years before we decided that, hey, let's not just test the software, let's check it against a set of requirements that the customer wants.  

This is where our first problem actually starts to arise: requirements based testing.  

You see, functional testing requires that the code is already written, and then you test it against a set of requirements.

Most often in a timeline that has already been squeezed, when you're rushing around trying to fit in thousands of tests at the last moment.

There was a study done a while back on what it costs to find an issue in code.  And, as you can see, I'm hoping most of you have already seen this graph somewhere; it should be reasonably well known.  The minute you get past coding, finding and fixing an issue becomes more and more expensive.

God forbid a customer finds an issue, because that's going to cost you 600-odd times what it would have, had you caught it earlier.

Now, this is the first problem with requirements-based testing.  You're waiting for that code, and you're testing it in the most expensive part of the cycle.  But unfortunately, it's not the only issue.

There is another issue with requirements-focused testing, and that is, quite simply, it leads to bias.

I always catch myself at this point in the speech because my full name is Tobias.

[Laughter]

So, this idea of bias.  When you are focused on a set of requirements, your testing and your brain are naturally going to go towards testing the requirement, and just the requirement.  That's what I have to do: check the box.  Requirement is done.  Whoops.  Accidentally gone backwards on ya.

The problem with that is, you miss all of the other stuff that the code is doing. Okay?  

Your focus, when it's on a requirement, doesn't see the impact on the rest of the system.

Your focus doesn't think about the things that should have been in the requirements but weren't written down in the first place.

So whoops. 

This requirements-based checking has been a problem for a long time, and I still see it in our industry today.  We need to move away from focusing on requirements and become a lot more nimble in our approach.

The next big problem...in 1985...we found...the great hammer of automation.

[Laughter]

This will be fantastic, we thought! 

We've got tight timelines, we're trying to cram all of these tests in, especially in agile, where you cram all of your tests into a two-week sprint.

But, hang on, this tool can squeeze them all down, can run them really, really fast…never mind the fact that you still have to write all those tests, and debug them, and get them working.

And it was seen as a great saviour.  And there's a lot of automation out there now.  Problem is...when all you have is a hammer, everything looks like a nail.  

[Laughter]

And we've been going down this path since 1985, as a young industry, where we have this nice little hammer, and we're trying to hit every single thing that we can with it.  

Now to be clear, and so that some of the sponsors of this conference don't lynch me at the end of this talk,

[Laughter]

there is nothing wrong with automation. There is nothing wrong with automating a test. 

If you want to build a regression test, go for it.  It's valuable.  It's worthwhile.  And there are a great many tools out there that do a good job, but you have to remember that they are a tool for a specific job.  

When you try to apply automation to everything, that's a problem. 
When you rely purely on automation, that's a problem.  And it's a problem that I still see in our industry today.  
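
To make that concrete, here's the kind of narrow, high-value regression check I mean.  This is just a minimal sketch in Python with pytest; calculate_fee is a hypothetical function, invented for illustration, not anything from a real codebase.

    # A minimal sketch of a targeted regression test (assumes pytest).
    # calculate_fee is hypothetical, invented purely for illustration.
    import pytest

    def calculate_fee(amount_cents: int, rate: float = 0.029) -> int:
        """Pretend fee calculation that once shipped with a rounding bug."""
        return round(amount_cents * rate)

    @pytest.mark.parametrize("amount_cents, expected_fee", [
        (100, 3),           # a typical small payment
        (0, 0),             # zero: the edge case that bit us before
        (999_999, 29_000),  # a large payment, checking rounding behaviour
    ])
    def test_fee_rounding_regression(amount_cents, expected_fee):
        # Pins down behaviour that is known to have broken once; cheap to
        # run on every release, and narrow enough to stay meaningful.
        assert calculate_fee(amount_cents) == expected_fee

A handful of pinned-down checks on behaviour that has actually broken before is regression automation that earns its keep; the trap is trying to encode every requirement this way.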

To give you an example of how badly this can go.

I was hired as a test manager a number of years ago, for a company that was selling software into very large American firms.  I'm not going to name them; I'm pretty sure they'd fly over and kill me.

But, being large firms, they all had their own internal acceptance testing teams, and we used to release code to them.  The company had been releasing code to them before I got there.  And on every release, their internal teams would come back with pages of problems.

And we couldn't understand it.  We had masses of automation: huge databases covering all of the clients' data, and thousands of tests testing each individual piece of functionality.  The automated tests worked, so why was this still a problem?

My first job, when I went there, was to actually throw out all of the automation. Which made some people at the company a little bit nervous, but that's okay.  

We threw out the automation and we went back to the basics. 

We looked at the original requirements.
We looked at the use cases.
We looked at how the customer was actually using the product and what else the product was actually doing while we were investigating.  

The subsequent release was the first release in that company's history where all of the customers came back and said...what did you do? We didn't find any bugs in this one.  

Quite simply what we did is we stopped relying on test automation. 

We did go back and we put in some automation for key areas of regression.  But the reliance on it?  Not worth it.

You see, the real superpower in the world of testing is not an automated tool.  It's you.  Every single one of you.

Your ability to think, your ability to reason, that ability that you have inside of you to go, huh, I wonder what happens if I do this...and then look at the result and follow that through, investigate. 

That is worth more than any test automation tool ever will be.  

So, how do we use your brains to provide more value? 
How do we speed up deployments if we're not relying on automation?
And, how do we improve the quality of the code being delivered? 

At Pushpay, we're looking more at quality versus testing. 

We're trying to get our QAs to be a bigger part of the solution and get them in nice and early. So, as you can see here, there are four areas that we're concentrating on.  

Shifting left, you've probably heard that term; a risk-based approach to testing, that might be a new one; W.O.M.Ming, I'm pretty sure you've not heard of W.O.M.Ming; and, bug bashes.  Not with baseball bats, don't worry.

So, coming back to this graph that we talked about earlier, looking at where it's expensive to find and fix bugs, and how that cost grows over time.  This is a really great piece of investigation that was done.

The problem is, it missed something.  It looked at where developers inject errors.

What it's missing is the rest of the SDLC.  It's not just development; development is itself based on requirements.

Every developer writes code to a set of requirements.  So, why are we not testing the requirements?

Seems fairly obvious that that's a good place to find issues. 

Are the requirements complete?
Are they contradictory?
Have they forgotten about other user flows?
What's the effect on the rest of the system?

You can look at that, in the requirements, and go, hmmm, yeah, I see the requirement. If I logically follow it through, we're going to be pumping a whole bunch of data at a database that is already overloaded. Is this requirement actually a good idea or should we be doing something a little bit different? Let's find those issues up front.  

So, shifting left. Slowing down to speed up.

As I said, we want to spend that time, up front, analysing the requirements. Finding out what's useful. 

Looking for impact on other areas of the code.  And performing a risk analysis.  This feeds into our next piece: the idea at Pushpay of a risk-based approach to testing.

We know, I think you'll all agree with me, that all software has risks and you will never find every bug. 

Is there anybody in this room who thinks they can find every bug in a piece of software?

[Laughter]

Okay, I'm talking to mostly sensible people.

[Laughter]

So, yeah, you won't find all the bugs. 

What we want to do is do the initial analysis and figure out which bits of this code, or which bits of this development, are risky: risky to the customer, risky to the company, risky to your infrastructure.  That's where we want to spend our time testing.

You don't need to cover all of your core requirements.  And I know that will make some people very nervous.  But if you're doing investigative testing, you're going to run through those core requirements anyway.

And, trust your development team. They should not be making simple mistakes in the core requirements. You should not be writing tests to cover them. 

And if your development team is making simple mistakes in the core requirements, I would suggest you have a problem in your hiring practice.

[Laughter]

But generally, developers don't come to work to make mistakes. They come to work for the challenge, and to get things right. Give them a little bit of trust.

The other thing you want to think about, in terms of risk and finding bugs, is this: is it better to go chasing tiny little risks, tiny little errors, spelling mistakes?  Or is it better to go and find out that putting some data in a database is going to crash the entire system?

I'm okay with a spelling mistake on the screen coming back from a customer.  I'd much rather take that call than the one asking, what happened to our entire system?
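
To illustrate that trade-off, here's a minimal sketch of the kind of likelihood-times-impact ranking I mean, in Python.  This is not Pushpay's actual process; the areas and the scores are entirely made up.

    # A sketch of risk-based prioritisation: score each area by how likely
    # it is to fail and how bad it is if it does, then spend your testing
    # time from the top of the list down. All names and numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class RiskArea:
        name: str
        likelihood: int  # 1 (rare) to 5 (almost certain)
        impact: int      # 1 (a spelling mistake) to 5 (whole system down)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    areas = [
        RiskArea("spelling on a settings screen", likelihood=3, impact=1),
        RiskArea("bulk import hitting an overloaded database", 4, 5),
        RiskArea("refunds against partially settled payments", 3, 4),
    ]

    for area in sorted(areas, key=lambda a: a.score, reverse=True):
        print(f"{area.score:>2}  {area.name}")

The spelling mistake lands at the bottom of the list, which is exactly the call being made here: take that phone call over the what-happened-to-our-entire-system one.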

The other advantage of this is you're going to minimise the amount of documentation you're writing.  

Who, here, has written 10-12 page test designs?  Big, long test plans.  Anyone?

Yeah?  A couple have. 

Yep. They're pretty horrible. They’re low value.

And I almost guarantee you'll never use them again after your initial run-through, so why are you writing them?

Do your testing, do your investigation, have a lightweight document up front that says:

This is what I'm going to be testing.
This is what I'm looking for.
These are our areas of risk. 

And then document what you actually tested.  That's far more valuable than 15 pages of test designs trying to cover every individual little point.
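
To give you an idea, that lightweight up-front document can be as short as this.  A made-up example, not a Pushpay template:

    Charter: explore the new refund flow for partial payments.
    Looking for: double refunds, lost data, knock-on effects on the ledger.
    Areas of risk: concurrent refunds, currency rounding, the already-busy reporting database.
    Session notes: filled in as you test; what you tried, what you saw, what you'd chase next.

One page, written before you start and finished as you go.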

W.O.M.M and pair testing are the next things that we do.

If you've been in the quality game for long enough, you've raised an issue somewhere, and heard this term come back to you from a developer:  Works on my machine.

[Laughter]

We're not shipping your machine though, are we?  

[Laughter]

And that's what W.O.M.Ming is.  

We flipped it on its head. When a developer has finished the initial piece of code, we go and sit with them.  

Okay, cool. Show me it working on your machine.
Show me what you have tested so far.
Talk to me about your unit tests.

And most importantly...from a coding perspective, from a developer who knows the system better than you do...what risks do you see that we may not have thought about?  

Now this requires a good amount of communication with your development team, which is another really important skill.

Getting to know your developers, getting comfortable enough to sit with them and go, show me what you did. I'm actually interested.  

You will learn about code.  They will learn about testing.  Especially when you ask them what tests they ran, and they run through the really simple stuff, and you go, well, did you think of anything else?  That's a good opportunity for everyone to learn.

And it instils in them the idea that they need to be thinking about quality as well.  Quality is everybody's responsibility.

And the last one is the Bug Bash. I love these ones.  

Because it gets everybody involved.  A Bug Bash is not quite what it sounds like.  We're not charging at the code and doing a deep dive to try and find as many bugs as possible.

If we find bugs, that's fine.  But the idea here is to get people from outside of the team: management, product, other developers; grab the barista from down the street, I don't care.  They will have a different view on it.

As I said, we're not looking for bugs; we're looking at how all of these different people use the new piece of functionality.  Because we've written it and we've tested it to the best of our abilities, trying to avoid bias, but there will be bias in there.  Because you know the code; you know how it's supposed to work.

Pull in somebody who doesn't know how it works, put it in front of them and say, have a go. And quite often, you will find nice, little edge cases that nobody's even thought about.

So, in conclusion of all of this, stop automating.  

[Laughter]

Please. Stop relying on just the requirements. 

There is so much more that you can do to provide value. You're an incredibly smart group of people and sitting there, checking boxes of requirements is a waste of your time. You can do a lot better.

A lot of this has come from my own experience over the years and more recently, I discovered a course called RST, Rapid Software Testing. If you've not seen anything on RST, I highly recommend you go and have a look at it. 

There is a book coming out in November this year; Michael Bolton and James Bach are writing a book on the RST process.  Really looking forward to that.

Elisabeth Hendrickson's book on Exploratory Testing, another great book to have a read of.  

And finally, if you do have any more questions, or you're a hard-core automated tester and you just want to throw things at me,

[Laughter]

I'll be available in the breakout after lunch, or you can find me on LinkedIn. Thank you very much.

[Applause]
