Writing Solid Code Weeks 7 and 8

The students continued with their compressor/decompressor development. Coverity continued to find plenty of issues for the students to fix. I’m about to start doing a bit of differential testing of their code, using voting to determine who is correct and who is wrong. I lectured on paranoid programming and on building a fuzzer, for sort of a loose definition of “lecturing.” The fuzzer material is great fun, it’s this broad and deep topic that ties into most other aspects of writing solid code.
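The voting step of differential testing can be sketched in a few lines. The following is a minimal illustration, not the actual test harness; the function and the student names are hypothetical. Each implementation is run on the same input, and the majority output is presumed correct:

```python
from collections import Counter

def majority_vote(outputs):
    """Given {implementation name: output on one input}, return the
    majority output and the list of implementations that disagree."""
    winner, _ = Counter(outputs.values()).most_common(1)[0]
    dissenters = [name for name, out in outputs.items() if out != winner]
    return winner, dissenters

# Three (hypothetical) student compressors run on the same input:
winner, dissenters = majority_vote(
    {"alice": b"abc", "bob": b"abc", "carol": b"abX"})
# winner == b"abc"; dissenters == ["carol"]
```

Voting is only a heuristic, of course: when a majority of implementations share the same bug, the vote fingers the correct one as the outlier.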

{ 7 } Comments

  1. prometheefeu | March 1, 2014 at 3:14 pm | Permalink

    If I may suggest a topic that is supremely useful for writing solid code in industry: saying no to absurd requirements (be they schedule or feature requirements). Management asking for impossible/stupid things is probably the number one cause of crappy code by good developers.

  2. regehr | March 1, 2014 at 4:37 pm | Permalink

    Hi prometheefeu, great point, I’ll discuss this with the class. The evil way to teach this would be to impose more and more ridiculous requirements until they rebel.

  3. bcs | March 2, 2014 at 9:04 am | Permalink

    How many of the students read these comments? Imposing a stupid requirement might be a quick way to find out.

  4. regehr | March 2, 2014 at 2:50 pm | Permalink

    bcs, good question, I’m not sure that any of them do! But certainly reading the comments would have been a great advantage for the triangle classifier…

  5. Don Bockenfeld | March 3, 2014 at 4:37 pm | Permalink

    If we’re suggesting new topics for writing solid code…
    1. dcw suggested cost estimation. If that isn’t practical, then perhaps size estimation – start predicting how large the programs will be. Use whatever units of measure are familiar and convenient: pages of source code, lines of code, semicolon count, function points, whatever. Size estimation is the first step in cost estimation.
    2. In avionics, the most safety-critical features are supposed to tolerate single-event upset — where cosmic rays flip a bit in the memory. This is mostly taken care of by hardware (single-bit-correct; double-bit-detect); but certain software design mitigations are possible.
    3. In avionics, system features are supposed to be designed/tested so that the probability of failure is less than 10^-9, 10^-7, 10^-6, 10^-3, or 10^0 depending on safety criticality. Let’s ignore the concept of ascribing a number to the probability of software failure for now. These failure probabilities have design implications; for example, a 10^-6 feature cannot rely on an 8-bit CRC.
    4. In avionics, software designed for different failure probabilities can only be combined if the design partitions the software such that no failure of the noncritical software can affect a safety-critical feature. It is helpful to visualize the noncritical software as maliciously trying to mess up the critical software.
    5. Which leads to security concerns: what design and coding idioms help to write solid code that cannot be taken over by a hacker?
    6. Hardware/software codesign to efficiently attain some particular level of circuit coverage in the self test is devilishly hard. I assume your undergrads don’t have the necessary hardware background to get into this.
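    A back-of-the-envelope check of the CRC claim in point 3, under the usual rough model (assuming corruption looks random with respect to the check, the standard approximation): an n-bit check accepts an arbitrary bad message with probability about 2^-n, so an 8-bit CRC leaves an undetected-error fraction near 4 x 10^-3, nowhere near a 10^-6 budget.

```python
# Rough model: under random corruption, an n-bit checksum accepts a
# bad message with probability about 2**-n.
def undetected_fraction(check_bits):
    return 2.0 ** -check_bits

for bits in (8, 16, 32):
    print(f"{bits:2}-bit check: ~{undetected_fraction(bits):.1e} undetected")
# 8-bit ~3.9e-03 (far too weak for a 1e-6 budget); 32-bit ~2.3e-10
```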

  6. dcw | March 4, 2014 at 2:05 am | Permalink

    Don, it's interesting to see your comments on how size estimation is generally a first step in cost estimation. In my line of work, I rarely see anybody attempt to estimate size ahead of time; instead, cost estimation tends to be about how many disparate components of the system an individual feature will modify. Generally, I find that the engineering work involved in implementing an individual algorithm effectively never dominates the cost of integrating that new algorithm into an existing complex environment. For example, a feature that ends up with 4000-5000 lines of code in one component is probably cheaper to deliver than a feature with 500 lines of code spread across 3 components. (In this case, by components I refer to parts of the system with abstraction boundaries between them.)

    Also, on how I consider cost estimation part of writing solid code: I've thought about that a bit more, and an interesting point about schedules comes to mind. I work on a project that is a medium-size component of a very large product. Across the entire product there are thousands of engineers working to deliver their part of the project, and there is an overarching schedule that specifies that it will be done on a specific date. Cost estimation becomes vitally important to generating solid code in such an environment, because it drives understanding of how much time is available to work on any individual part of the system. (We need to do X, Y, and Z. Z is a really hard problem with a number of possible solutions. The best solution is REALLY complicated. We'd also really like to do A, B, and C.) Part of delivering solid code is knowing what it costs to deliver, so that a solution can be chosen that will fit the schedule but also be solid. With good cost estimation it becomes possible to determine that the best solution for the business is to do a really good job on X and Y, and a sufficiently solid but not spectacular implementation of Z, if that leaves the time to deliver a pretty good (and solid) version of A, B, and C. Without good cost estimation, it's easy to end up in the trap of spending all of the time that would be best spent on A, B, and C attempting instead to make Z perfect.

  7. Don Bockenfeld | March 4, 2014 at 9:25 pm | Permalink

    dcw, I had assumed that the class was mostly working on a series of standalone projects – without a preexisting infrastructure to integrate with. However, that suggests another new topic: provide students with a large complex program and ask them to add a new feature to it.