Why Verify Software?

People like me who work on software verification (I’m using the term broadly to encompass static analysis, model checking, and traditional formal verification, among others) like to give talks where we show pictures of exploding rockets, stalled vehicles, inoperable robots, and crashed medical devices. We imply that our work is helping, or at least could help, prevent very serious software-related problems. It’s not clear that this sort of claim stands up to a close examination.

What would it take to show a causal link between verification efforts and software safety? A direct demonstration using a controlled experiment would be expensive. An indirect argument would need several parts. First, we’d have to show that flaws revealed by verification efforts are of the kind that could compromise safety. Second, we’d need to show that these flaws would not have been found prior to deployment by traditional V&V — those bugs are merely expensive, not harmful. Third, we’d need to argue that a given dollar spent on verification adds more safety than that same dollar spent on other ways of improving software. Finally, we would need to argue that a successful verification effort won’t have unintended consequences such as emboldening management to substantially accelerate the schedule.

Of course, none of this criticizes software verification research, which I work on and very much believe in. We simply need to be clear about its purpose, which is to reduce overall cost. A top-level goal for software verification that fails to mention cost (for example “minimize damage caused by bugs in software intensive systems”) is untenable because obviously the best way to minimize such damage is to radically simplify, or even eliminate, the software. Of course, in practice we do not wish to radically simplify or eliminate the software because it brings so many benefits.

A more reasonable high-level goal for software verification might be “increase, to the largest possible extent given the methods available, total system utility.” “Total system utility” has both positive and negative components, and verification is mainly about mitigating some of the negative components, or costs, including not just development and manufacturing costs, but also maintenance and accident-related costs. In the next couple of days I’ll post a more specific example where the cost-based analysis of verification is much more useful than the feel-good analysis promoted by exploding rocket pictures.

4 Comments

  1. Eric Eide | June 20, 2011 at 4:40 pm | Permalink

    Re: “reduce overall cost.”

I’ve heard it said that an engineer is someone who can do with one dollar what any fool can do with two.

    We are software engineers.

  2. Andrius Bentkus | June 20, 2011 at 5:22 pm | Permalink

Capitalization is not always the best scale by which to rate your research.
    A lot of things have been invented only because they were interesting (lasers, for example), where at first capitalization seemed impossible. Now we have a giant laser industry.
    Another good example is the American stealth bomber. The math behind radar deflection was done 20 years earlier by a Russian mathematician.

    Capitalization came, but just a lot later.

Rating your research by its potential for immediate capitalization is not always the best way; sometimes it takes time.

  3. regehr | June 20, 2011 at 6:13 pm | Permalink

    Andrius, I generally agree. Even so, it’s important to embark upon research projects with a clear view of what might emerge as a successful outcome.

  4. Edward Z. Yang | June 21, 2011 at 11:08 pm | Permalink

A frequent argument fielded against formal methods is that most formalizations make assumptions and prove properties that may not actually be what we want (consider the shifting requirements in cryptography as one example). I’m curious to see how your cost-based analysis factors that in: cost analysis is important, but notoriously tricky due to all of the factors it has to account for.