I do not enjoy traveling with babies. Getting car seats and strollers through security; having little crawling people disappear under a sea of chairs and come out again eating things they found; being unable to console a little one whose ears are hurting on descent. Ugh. But as the babies have grown up into toddlers and then little boys, I’ve started to really enjoy traveling with them:
- Constant stream of chatter combats boredom
- They can carry the luggage (just kidding… mostly)
- They ensure that snacks and water are always available
- Sometimes they go above and beyond the call of duty on snacks; for example, my 3-year-old is liable to sneak an uneaten half of a hamburger into his pocket; he’ll offer me a bite later if he sees I’m hungry
- They get us into the “family” security line; this saved us about half an hour at SLC yesterday
- The family restrooms that some airports have are usually roomy and clean
- The kids actually enjoy air travel and are excited by jet engines; this is infectious and makes it more fun for everyone; even reading those stupid SkyMall catalogs can be entertaining if a 5-year-old is commenting on them
Matt’s post is so great that I need a special post just to link to it.
I’m preparing a longer piece on compiler correctness but thought this idea was worth putting into its own short post. The question is:
Which would you choose: A compiler that crashes more often, or one that silently generates incorrect code more often?
Of course, all real compilers have bugs, and all real compilers display both kinds of symptoms. Typically these bugs start to show up when a project pushes the boundaries a bit. People who develop embedded software encounter both kinds of bugs depressingly often.
I suspect that many readers will have a gut preference for the crashy compiler, based on the idea that it’s better to know early when something is wrong with the compiler. But consider that if you need to ship a product, the crashy compiler is likely to be a showstopper until a workaround is found. Upgrading the compiler is usually totally unacceptable, so the workaround has to be on the application side. On the other hand, a bit of wrong code in some module is probably not a showstopper: development and testing of the rest of the system can proceed.
It is sometimes possible to choose between failure modes for a given bug. Failure-oblivious computing (FOC) is based on the idea that we prefer a highly available system over a crashy one. Applied to a compiler, FOC would (at least sometimes) turn crash errors into wrong-code errors. I’ve talked to compiler developers who used a very FOC-like approach to avoid compiler crashes: their compiler had a pass that would walk various data structures, repairing anything that looked fishy. These were smart people, and they decided to spend their time masking bugs rather than fixing them. In contrast, other compiler developers use assertions to convert wrong-code bugs into crash bugs, and to turn dirty crashes (segfaults) into clean ones. What I’m really asking here is not what kinds of bugs people should put into code (we have little enough control over that!) but whether we want developers to invest in mechanisms that make bugs show up earlier (assertions) or not at all (FOC).
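To make the contrast concrete, here is a toy sketch (all names invented; this is not from any real compiler): the same malformed IR node handled both ways. An assertion-style check turns the bug into an immediate, clean crash, while a FOC-style repair pass silently patches the node and lets compilation continue, possibly producing wrong code.

```python
class Node:
    """A toy IR node: a binary operation with two operands."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def check_invariants(node):
    """Assertion style: fail fast and loudly on a malformed node."""
    assert node.op in ("+", "*"), f"unknown operator {node.op!r}"
    assert node.left is not None and node.right is not None, "missing operand"
    return node

def repair(node):
    """FOC style: silently patch anything that looks fishy so
    compilation can continue -- at the risk of wrong code."""
    if node.op not in ("+", "*"):
        node.op = "+"          # guess an operator
    if node.left is None:
        node.left = 0          # invent an operand
    if node.right is None:
        node.right = 0
    return node

def evaluate(node):
    return node.left + node.right if node.op == "+" else node.left * node.right

# A node that a buggy earlier pass might have produced: bad op, missing operand.
fixed = repair(Node("-", 5, None))
print(evaluate(fixed))         # compilation "succeeds" with a silently wrong result

try:
    check_invariants(Node("-", 5, None))
except AssertionError as e:
    print("crash:", e)         # the bug surfaces immediately, at its source
```

The trade-off shows up exactly as described above: the repaired node evaluates without complaint, while the asserting version stops the build right where the inconsistency was introduced.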
At 11749′ / 3581 m, Timpanogos is the second-highest mountain in the Wasatch Range. It’s a classic Utah hike and I’d wanted to do it for years, but never managed to convince myself the extra driving was worth it when there are a couple dozen 11,000′ peaks that are closer. I should have done this hike earlier: it’s probably the prettiest mountain I’ve been on. Unfortunately, the only day that worked both for me and hiking buddy Dave Hanscom was a Saturday, and the mountain was really crowded. We were moving fast and passed literally 200 people in the three hours it took to make it to the summit. This was about as fast as I could move; it’s 7 miles and more than 4000′ of elevation gain. Dave’s a trail runner and could have gone faster. On the way up we also passed dozens of people on their way down: it’s apparently common for people to start this hike around 1am — a pretty hardcore alpine start for the obviously non-mountaineering demographic! At the top it was not just crowded, but cold; we had to put on gloves within a few minutes.
Instead of descending the Timpooneke trail that we had climbed, we traversed across the summit ridge on a slightly exposed trail that leads to the top of the “timp glacier,” which was crevassed during the early 20th century but is now either a rock glacier or simply a glacial remnant. We had hoped to lose altitude quickly by glissading, but the snow had so many rocks embedded in it that we mostly had to walk. At the bottom of the snow we refilled water bottles and descended the Aspen Grove trail — the other major route up and down this mountain — until we reached the Hartsky cutoff, which heads back across the mountain toward where we started. This trail follows what looks like an old, nearly-overgrown mining road and is hard to follow in places. However, the scenery was excellent and, in stark contrast to the major trails, we saw nobody until getting back to my truck. I was really happy Dave knew about this cutoff, which is described here; these seldom-used routes in the Wasatch are always my favorites.
Ten years ago today Sarah and I and the cats woke up somewhere in western Kansas and drove to our rental house in Salt Lake City:
A lot has happened since then — marriage, kids, tenure — but still, it’s hard to imagine that more than a quarter of our lives has been spent in Utah.
Our little Honda (in the driveway in the picture), which we drove for five more years, turned out to be poorly adapted to life in the desert. Black both outside and in, it got extremely hot. Its minuscule ground clearance turned moderate roads in Southern Utah into epics. It was not very secure and got broken into at least twice before finally being stolen from in front of our house (we got it back).
In industry it’s often pretty easy to know when to stop working on a project: you might get moved off the project, it might get canceled, etc. In academia, it’s less clear: I can stop working on something after half an hour, or else I can work on basically the same idea until the end of my career. This piece is about avoiding the problem of quitting a project too early; working on a project too long is depressing to see, but probably not interesting to discuss. Let’s look at a few anecdotes.
- About five years ago I was at an NSF workshop on high-confidence software for medical devices. The cool thing about this kind of workshop is that it mixes attendees from academia, industry, and government. Anyway, one very smart, well-respected researcher stood up and basically said “we’ve solved all the important system verification problems; you people just haven’t picked up the work and used it.” This made my jaw drop. First, you’d be hard pressed to find even one problem in the area of embedded system verification that has actually been solved, much less all of them. Second, even for problems that are solved in some theoretical sense, the theory is maybe 2% of the real solution, which is giving people a way to verify real systems cost-effectively. The implication that bothered me is that the academic’s responsibility ends once the theorems have been proved.
- A few months ago I was chatting with Eddie Kohler and randomly asked him why he thought Click had been successful (my guess would be that Click is in the top 1-2% of most influential systems software projects from the past 20 years). His answer was simple: he kept supporting it. Of course there’s a lot more to it than that, but I’d imagine he’s onto something.
- Not too long ago one of my students got a piece of software working and figured it was time to move on to a project he considered more interesting. This bugged me. There are only two cases here: either the project is irrelevant or else it matters to the world. If it doesn’t matter, why were we wasting time on it at all? If it does matter, then you need to keep working on it until you’ve made life better for the people who stand to benefit from the project. This means actually running the tool, seeing where it works and where it falls over, fixing the areas where it doesn’t work, releasing it, evangelizing, and all that.
Am I claiming that academics should always productize their research and keep supporting it? No, definitely not. The ability to abandon any project at any time (subject to student and grant issues) is one of the great advantages of being a professor.
I’m arguing that the “generating ideas” part of research is overrated. The important thing is to have just enough good ideas — one of my colleagues likes to say you only need a good idea about every two years — and then to build a competent research program based on them. Learning when to quit and when to persevere is an important, underrated skill. I’m not that good at this myself, having quit projects both too early and too late in the past.