1500+ Bugs from One Fuzzer


This metabug links to all of the defects found in Firefox’s JavaScript engine by jsfunfuzz. The surprise here isn’t that bugs were found, but that more than 1500 bugs were found in a single language runtime by a single test-case generator. I’m interested in exactly what is going on here. One possibility is that JS performance has become so important over the last five years that it supersedes all other goals, making these bugs inevitable. Another possibility is that something is wrong with the architecture or the development process of this particular JS engine. It’s also possible that I’m simply out of touch with bug rates in real software development efforts and this kind of result is perfectly normal and expected. Regrettably, jsfunfuzz is no longer public, most likely because releasing it amounted to handing a loaded gun to the enemy. In any case, jsfunfuzz is an excellent example of how powerful random testing can be.
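
Since jsfunfuzz is no longer available to study, here is a minimal sketch, in Python, of the kind of grammar-based random program generation this class of fuzzer performs. The toy grammar and every identifier below are invented for illustration; the real tool’s grammar, weights, and feedback mechanisms are far richer.

    import random

    # Toy JS grammar: nonterminals map to lists of productions; anything
    # not in the table is a terminal and is emitted verbatim.
    JS_GRAMMAR = {
        "expr": [
            ["literal"],
            ["expr", " + ", "expr"],
            ["expr", " * ", "expr"],
            ["(function (x) { return ", "expr", "; })(", "expr", ")"],
            ["[", "expr", ", ", "expr", "]"],
        ],
        "literal": [["0"], ["1.5"], ["'s'"], ["null"], ["undefined"], ["{}"]],
    }

    def gen(symbol="expr", depth=0, max_depth=6):
        """Expand `symbol` by choosing a random production; near the depth
        limit, force the shortest production so the output stays finite."""
        if symbol not in JS_GRAMMAR:
            return symbol  # terminal: emit as-is
        productions = JS_GRAMMAR[symbol]
        production = (min(productions, key=len) if depth >= max_depth
                      else random.choice(productions))
        return "".join(gen(part, depth + 1, max_depth) for part in production)

    if __name__ == "__main__":
        # Each output would be fed to the JS engine under test; crashes
        # and assertion failures are the signal.
        for _ in range(5):
            print(gen())

Even a generator this naive quickly produces expressions that no human test author would think to write, which is exactly where engine bugs tend to hide.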


4 responses to “1500+ Bugs from One Fuzzer”

  1. Currently, I think SpiderMonkey has 500,000 lines of code. If we assume that code churn over that time gives a factor of 2, that’s roughly a million lines of code written over the period, so fuzzing caught about 1 bug per 700 lines. For code in an unsafe language that is quite complicated, that doesn’t seem so shocking.

  2. Well, first off, in the past five years the engine in question has gained not one but three separate JITs. Given the complexity of those sorts of things, it’s pretty much not testing one program but four (a point made concrete in the sketch below). Keep in mind, too, that JS engines in general appear to be fighting an aggressive benchmark war, which means this code probably sees much higher churn than would normally be found in native-code compilers. Also, a significant fraction of those errors relate to two features (E4X and sharp variables), both of which are extremely underused and therefore unlikely to be caught by normal smoke testing.

    Also, this fuzzer has been running more or less non-stop for years, so you’re looking at tens of thousands of hours of fuzz-testing. In addition, the JS team knows and expects the fuzzer to be running, so I suspect people are not quite as rigorous in their own testing as they might otherwise be, trusting the automated tool to catch any pernicious edge cases (I know someone who had to land their patch five or more times because the fuzzer kept finding problems in it).
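
    To make the “four programs” point concrete, here is a minimal sketch of differential testing across execution modes. The shell path and flag names are hypothetical stand-ins (the real flags depend on the engine and its vintage); the idea is just to run each generated test under every mode and treat any disagreement or crash as a candidate bug report.

        import subprocess

        JS_SHELL = "./js"  # hypothetical path to the engine's shell
        MODES = {
            "interp": ["--no-jit"],     # hypothetical: interpreter only
            "jit":    ["--jit-eager"],  # hypothetical: compile everything
        }

        def run(test_file, flags):
            """Run one generated test under one mode, capturing its result."""
            proc = subprocess.run([JS_SHELL, *flags, test_file],
                                  capture_output=True, text=True, timeout=10)
            return proc.returncode, proc.stdout

        def differs(test_file):
            """True if any two modes disagree on exit code or output."""
            results = {name: run(test_file, flags)
                       for name, flags in MODES.items()}
            return len(set(results.values())) > 1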

  3. Thanks for the details, Joshua.

    It’s an extremely bad idea to rely on a fuzzer to catch sloppy treatment of edge cases. Fuzzers tend to have blind spots: a grammar-based generator, for example, can only exercise constructs its grammar knows how to produce. Understanding where those blind spots lie can be quite difficult.