rss-bridge 2024-09-05T00:45:00+00:00

SE Radio 632: Goran Petrovic on Mutation Testing at Google

Goran Petrovic, a Staff Software Engineer at Google, speaks with host Gregory M. Kapfhammer about how to perform mutation testing on large software systems. They explore the design and implementation of the mutation testing infrastructure at Google, discussing the strategies for ensuring that it enhances both developer productivity and software quality. They also investigate the findings from experiments that quantify how mutation testing enables software engineers at Google to write better tests that can detect defects and increase confidence in software correctness. Brought to you by IEEE Computer Society and IEEE Software magazine.



Show Notes

Related Episodes

  • SE Radio 474: Paul Butcher on Fuzz Testing
  • SE Radio 609: Hyrum Wright on Software Engineering at Google
  • SE Radio 324: Marc Hoffmann on Code Test Coverage Analysis and Tools
  • SE Radio 317: Travis Kimmel on Measuring Software Engineering Productivity

Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.

Gregory Kapfhammer 00:00:18 Welcome to Software Engineering Radio. I’m your host, Gregory Kapfhammer. Today’s guest is Goran Petrovic, a staff software engineer at Google. Goran focuses on software quality improvement through the development and evaluation of new software tools and processes. Welcome to the show.

Goran Petrovic 00:00:37 Thanks Greg. It’s good to be here.

Gregory Kapfhammer 00:00:40 Today during our episode, we’re going to be talking about Goran’s effort to introduce mutation testing at Google. We’re going to start by exploring what mutation testing is and the ways in which Google’s software engineers tend to use it. Now Goran, I know you published a paper called “State of Mutation Testing at Google,” and if you don’t mind, I’m going to read you a sentence from the paper and then invite you to comment on it. Does that sound cool?

Goran Petrovic 00:01:04 Sounds great.

Gregory Kapfhammer 00:01:05 Okay. So the paper was called “The State of Mutation Testing at Google,” and the sentence was “Mutation testing assesses test suite efficacy by inserting small faults into programs and measuring the ability of the test suite to detect them.” So at a high level, can you tell us what is mutation testing and how does it work?

Goran Petrovic 00:01:26 Sure. So everyone is familiar with the standard criteria like line coverage; it’s been a standard in the industry for many years. And mutation testing is just a step further, to ensure that the code is not only covered with tests but actually properly tested in terms of assertions. You could imagine that you have a function that you call from a test and don’t assert anything. It can have a hundred percent coverage, but it’ll never catch any real bugs. And mutation testing comes to help in terms of actually validating that these tests do something useful. The big picture is: why do we even write tests? Maybe 10 or 20 years ago, people were complaining that they had to do it and said, we don’t need that. Now it’s more or less a standard, but the real question is why you do it.
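
[Editor’s aside: a hypothetical sketch of the point above, in Python; the function and test names are invented for illustration, not taken from the episode.]

```python
# Hypothetical illustration: a test with 100% line coverage but no assertions.

def total(prices):
    """Sum a list of prices."""
    return sum(prices)

def test_total_no_assertions():
    # Executes every line of total(), so line coverage reports 100%,
    # but with no assertion this test can never fail, even if total() is buggy.
    total([1.0, 2.0, 3.0])

def test_total_with_assertion():
    # This version actually validates the result, so a bug in total()
    # (say, returning max(prices) instead of the sum) would be caught.
    assert total([1.0, 2.0, 3.0]) == 6.0
```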

Goran Petrovic 00:02:15 And the reason is we don’t want bugs to be introduced into the code and then pushed to our users. So the best way to do that is to have tests that can catch these bugs, not only tests that cover certain lines of code. So mutation testing is a system where you change the code, change the implementation, by mimicking a bug in a way. Let’s say replace A plus B with A minus B, and then run all the tests. If no tests fail, there’s something wrong there: you might have coverage, but this is not a good test. And mutation testing is a way of generating many of these different code versions with slight changes, called mutants, and then running all the tests on all of them and calculating the mutation testing score, which tells you how many of these mutants your tests were able to detect. And then that’s another number, along with coverage, that can help you understand the quality of your tests even better.

Gregory Kapfhammer 00:03:09 So what you’re explaining is interesting, and yet it may sound somewhat counterintuitive because normally we associate testing with the process of getting bugs out of our code. But you’ve explained that mutation testing is putting bugs into our code. Can you unpack that a little bit further? Why would we want to put a bug into our code?

Goran Petrovic 00:03:29 Sure, I do that every day. Programming is the art of adding bugs to your code and then removing them, and you probably never remove all of them; some remain. So you could say that 99.99% of the code that I write is full of bugs, and eventually it’ll have so few bugs that they won’t be so important, so I can release my code. And that’s a good reason to try to insert bugs into the code: it’s inevitable that we will do it. As human beings, we are not evolved to write complicated code; we are evolved to run around in the woods. So it’s very hard to hold concentration, and the bugs will definitely happen. So mutation testing is just a natural way of simulating that. You just say: imagine if you made a bug here, imagine if you inserted a bug here, what would happen?

Goran Petrovic 00:04:15 And people sometimes ask, but I didn’t, so my code is correct, why should I care? But software engineering is programming integrated over time. So if you just write code for a one-off script or a competition, that’s okay. But most of the code that companies write will live for years and years, decades even. And you as the author are just the first author. There will be countless changes happening to the code, some from your team, from the people who are familiar with the code, but some from completely different people who want to add a feature or do a refactoring. And even I don’t understand most of the things that I’m working on, let alone if I’m editing someone else’s code. So the chance of me introducing a bug is huge, and that’s why we want to protect against that.

Gregory Kapfhammer 00:05:01 So if I’m understanding you correctly, when I put these small faults or mutants in the program and my test suite detects them, then I know that if a similar bug were later introduced into my program, the test suite would still detect it. Is that the right idea?

Goran Petrovic 00:05:15 It is. And of course a student might ask, I’m never going to write A minus B instead of A plus B in a sum function. But the idea is that these small changes, these mutants, might not look like real bugs, but they are correlated with real bugs. This is the core assumption: if it doesn’t hold, then mutation testing doesn’t make any sense, but it happens to hold. So that’s the reason why inserting these small, non-bug-looking changes works: if your tests can catch them, they can also catch real bugs.

[...]

