Book Review: Street-Fighting Mathematics

The Trinity test occurred on a calm morning.  Enrico Fermi, one of the observers, began dropping bits of paper about 40 seconds after the explosion; pieces in the air when the blast wave arrived were deflected by about 2.5 meters.  From this crude measurement, Fermi estimated the bomb’s yield to be ten kilotons; he was accurate within a factor of two.  Although Street-Fighting Mathematics does not address the problem of estimating bomb yields, it gives us a reasonably generic toolbox for generating quantitative estimates from a few facts, a lot of intuition, and impressively little calculus.  As one of the reviews on Amazon says, this book makes us smarter.

Street-Fighting Mathematics — the title refers to the fact that in a street fight, it’s better to have a quick and dirty answer than to stand there thinking about the right thing to do — is based on the premise that we can and should use rapid estimation techniques to get rough answers to difficult problems.  There are good reasons for preferring estimation over rigorous methods: the answer is arrived at quickly, the full set of input data may not be needed, and messy calculus-based or numerical techniques can often be avoided.  Perhaps more important, by avoiding a descent into difficult symbol pushing, a greater understanding of the problem’s essentials can sometimes be gained and a valuable independent check on rigorous — and often more error prone — methods is obtained.

Chapter 1 is about dimensional analysis: the idea that by attaching dimension units (kg, m/s², etc.) to quantities in calculations about the physical world, we gain some error checking and also some insight into the solution.  Dimensional analysis is simple and highly effective and it should be second nature for all of us.  Too often it isn’t; my guess is that it gets a poor treatment in secondary and higher education.  Perhaps it is relevant that about ten years ago I went looking for books about dimensional analysis and found only one, which had been published in 1964 (Dimensional Analysis and Scale Factors by R.C. Pankhurst).  If Mahajan had simply gone over basic dimensional analysis techniques, it would have been a useful refresher.  However, he ups the ante and shows how to use it to guess solutions to differential and integral equations: a genuinely surprising technique that I hope to use in the future.
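
To see the flavor of the technique, consider a classic example (mine, not necessarily Mahajan’s): the period T of a pendulum can plausibly depend only on its length l, the gravitational acceleration g, and its mass m.  No combination involving m has units of time, so the mass drops out, and the only combination of l and g with units of time is the square root of l/g:

  \left[ \sqrt{l/g} \right] = \sqrt{\frac{\mathrm{m}}{\mathrm{m/s^2}}} = \mathrm{s}
  \quad\Longrightarrow\quad T \sim \sqrt{l/g}

Solving the differential equation for real adds only the dimensionless constant 2π.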

Chapter 2 is about easy cases: the technique of using degenerate cases of difficult problems to rapidly come up with answers that can be used as sanity checks and also as starting points for guessing the more general solution.  Like dimensional analysis, this is an absolutely fundamental technique that we should all use.  A fun example of easy cases is found not in Street-Fighting Mathematics, but in one of Martin Gardner’s books: compute the remaining volume of a sphere which has had a cylindrical hole 6 inches long drilled through its center. The hard way is to grind through the calculus for a sphere of arbitrary radius. In contrast, if we trust that the problem has a unique answer, we’re free to choose the easy case where the diameter of the cylinder is zero, trivially giving the volume as that of a 6-inch-diameter sphere: 36π cubic inches. Many applications of easy cases are simple enough, but again Mahajan takes it further, this time showing us how to use it to solve a difficult fluid flow problem.
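
For the skeptical, the easy-cases answer survives the general calculation.  If the hole has length 2h (here h = 3 inches) in a sphere of radius R, the standard calculus result (this is the old “napkin ring” problem) is that the remaining volume does not depend on R at all:

  V = \frac{4}{3}\pi h^{3} = \frac{4}{3}\pi \cdot 3^{3} = 36\pi \ \text{cubic inches}

The radius cancels completely, which is exactly why the degenerate case can be trusted here.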

Chapter 3 is about lumping: replacing a continuous, possibly infinite function with a few chunks of finite size and simple shape.  This is another great technique.  The chapter starts off easily enough, but it ends up being the most technically demanding part of the book; I felt seriously out of my depth (it would probably help if I had used a differential or integral equation in anger more recently than 1995).
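
To give a small taste of lumping (my example, not one of the book’s): suppose we want a quick estimate of the Gaussian integral.  Replace the bell curve with a single rectangle of height 1 (the maximum of the integrand) and width 1 (roughly where the integrand has decayed to a small fraction of its peak):

  \int_0^{\infty} e^{-x^{2}}\,dx \;\approx\; 1 \times 1 = 1,
  \qquad \text{exact value: } \tfrac{\sqrt{\pi}}{2} \approx 0.886

One crude chunk gets within about 13% of an integral whose antiderivative isn’t even elementary.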

Chapter 4 is about pictorial proofs: using visual representations to create compelling mathematical explanations where the bare symbols are non-intuitive or confusing.  This chapter is perhaps the oddball: pictorial proofs are entertaining and elegant, but they seldom give us the upper hand in a street fight.  I love the example where it becomes basically trivial to derive the formula for the area of a circle when the circle is cut into many pie-pieces and its circumference is unraveled along a line.
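
For those who haven’t seen it, the argument can be made precise roughly as follows (my paraphrase of the standard picture): slice the circle into many thin wedges and stand them side by side along the unrolled circumference, a segment of length 2πr.  In the limit of many wedges they fill a triangle of base 2πr and height r, so

  A = \frac{1}{2} \cdot 2\pi r \cdot r = \pi r^{2}

with no integral in sight.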

Chapter 5 is “taking out the big part”: the art of breaking a difficult problem into a first-order analysis and one or more corrective terms.  The idea is that analyzing the big part gives us an understanding of the important terms, and also that in many cases we may end up computing few or none of the corrections since the first- or second-order answer may turn out to be good enough.  Mahajan introduces the idea of low-entropy equations: an appealing way of explaining why we want and need simplicity in street-fighting mathematics.
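
Here’s a tiny example in this style (mine, not the book’s): to estimate √10, take out the big part, 9, and treat what’s left as a correction:

  \sqrt{10} = 3\sqrt{1 + \tfrac{1}{9}}
  \approx 3\left(1 + \tfrac{1}{18}\right) \approx 3.167

The first-order correction alone lands within 0.15% of the true value 3.16228, so the higher-order terms never need to be computed.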

Finally, Chapter 6 is about reasoning by analogy: attacking a difficult problem by solving a related, simpler one and then attempting to generalize the result.  The example of how many parts an n-dimensional space is divided into by introducing some n-1 dimensional constructs is excellent, and ends up being quite a bit more intricate than I’d have guessed.  This chapter isn’t the most difficult one, but it is probably the deepest: analogies between different areas of mathematics can border on being spooky.  One gets the feeling that the universe is speaking to us but, like children at a cocktail party, we’re not quite putting all of the pieces together.
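
For reference, the standard answer (assuming the dividing hyperplanes are in general position) is

  R(m,n) = \sum_{k=0}^{n} \binom{m}{k}

regions for m hyperplanes, i.e. (n-1)-dimensional constructs, in n-dimensional space; for lines in the plane this specializes to 1 + m + m(m-1)/2.  Checking the low-dimensional cases by hand is exactly the analogy step that makes the general pattern believable.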

Back-of-the-envelope estimation is one of the main elements of a scientist or engineer’s mental toolkit and I’ve long believed that any useful engineer should be able to do it, at least in a basic way.  Others seem to agree, and in fact quantitative estimation is a lively sub-genre of the Microsoft / Google / Wall Street interview question.  Speaking frankly, as an educator of engineers in the US I have to admit that our system fails somewhat miserably at teaching students the basics of street-fighting mathematics.  The problem (or rather, part of it) is that mathematics education focuses on rigorous proofs and derivations, while engineering education relies heavily on pre-derived cookie-cutter methods that produce little understanding.  In contrast, estimation-based methods require strong physical intuition and good judgment in order to discard irrelevant aspects of a problem while preserving its essence.  The “rapidly discard irrelevant information” part no doubt explains the prevalence of these questions in job interviews: does anyone want employees who consistently miss the big picture in order to focus on stupid stuff?

In summary this is a great book that should be required reading for scientists and engineers.  Note that there’s no excuse for not reading it: the book is under a Creative Commons license and the entire contents can be downloaded for free. Also, the paper version is fairly inexpensive (currently $18 from Amazon).  The modes of thinking in Street-Fighting Mathematics are valuable and so are the specific tricks.  Be aware that it pulls no punches: to get maximum benefit, one would want to come to the table with roughly the equivalent of a college degree in some technical field, including a good working knowledge of multivariate calculus.

Book Review: Pale Fire

Pale Fire is a 999-line poem written by John Shade.  It is also a novel by Vladimir Nabokov that contains an introduction by Charles Kinbote, the poem, and Kinbote’s extended commentary on the poem.

On the surface, Pale Fire is straightforward.  The poem is a touching — but not, it would seem, terribly good — autobiography and meditation on mortality.  Kinbote turns out to be the deposed king of Zembla, a small country in northern Europe, and is hilariously self-important, generally telling his own story rather than commenting on the poem.  When he briefly returns to the poem, it is usually to misread it entirely.  Nabokov clearly had great fun writing this, and he deadpans it all the way through.

Looking a little deeper, the story is more tangled.  Kinbote is detached from reality and we begin to suspect that he is not really a deposed king, that Zembla may not exist (within the novel), and that Kinbote may not, in fact, even exist.  First, I think we can take for granted that Shade is dead at the end of the novel; what author would waste “Shade” on a living man?  Second, it is pretty obvious that Kinbote is insane and/or a fabrication.  Towards the end of the novel the Kinbote voice begins to slip, and very late the bizarre Gradus/Gray distinction is made.

In the end it seems clear that Kinbote has murdered Shade and stolen the Pale Fire manuscript, that Gradus/Gray is a total fabrication, and that Kinbote is writing his commentary while hiding from the law.  I’ve heard other explanations, for example that Gray is the murderer; this doesn’t feel right.  Also, Kinbote has told us that when playing word golf he can turn “live to dead in five steps,” which is surely not a coincidence.

Pale Fire is a wonderful puzzle. It is strongly reminiscent of Gene Wolfe’s work, Peace in particular.  My guess would be that some of the odder words used in Wolfe’s novels (nenuphar, psychopomp) indicate that he was influenced by Pale Fire.

Book Review: Surviving Your Stupid Stupid Decision to go to Grad School

Good jobs have barriers to entry.  Sometimes these barriers are natural (not everyone is capable of writing a novel or being a leader) and sometimes they are artificial (not everyone is born in the right place or to the right parents).  Many well-paid jobs requiring very specialized skills are protected by — among other mechanisms — the requirement that applicants have an advanced degree.  For some people, the substantial costs of acquiring one of these degrees are worth it, for others they are not.  Because graduate students’ experience varies greatly across disciplines, across departments, and across individuals within a specific department, it’s hard to make useful generalizations about it.

Surviving Your Stupid Stupid Decision to Go to Grad School, nevertheless, does a good job capturing and satirizing important universal aspects of the graduate school experience: the insecurity, the low or nonexistent pay, the whims of advisers and committees, the potential for wasting what could be productive and happy years, etc.  Because the book is targeted at people already in grad school, its purpose is not to inform, but rather to entertain and also to reassure people that some of the more absurd parts of grad student life are, in fact, widely experienced.  No serious advice is given until the epilogue, which manages to say what is perhaps the single most important thing that a grad student can hear: don’t just sit there, take control.

Surviving is very funny in places, but for my taste it tries too hard.  The hit rate for jokes is not that high; one hopes that Ruben — who, according to the back cover, has a stand-up comedy act — doesn’t quit his day job. I read it on a plane flight; not just any plane flight, but one with my three- and five-year-old boys, and without my wife.  As anyone who flies with kids can tell you, it’s not a good time for heavy reading.  An undistracted reader could polish the book off in an hour.

One day soon, I am going to leave this book in the lab where my grad students work.  Then I am going to walk out, chuckling in an evil way.

How to Get a Grant (from NSF, for Computer Scientists)

Learning to write good grant proposals is hard. First, while we learn to write papers in grad school, most of us only learn to write grants after completing a PhD, and possibly only after getting a faculty position. Second, the hit rates for grants are typically lower than for papers. Third, it’s fundamentally harder to write compelling text about a problem that you have not yet solved. For similar reasons, the PhD proposal is a major stumbling block for many folks. Fourth, grant review panels are smaller and have much higher variance in expertise than do top-tier conference program committees. Finally, there is less visibility into the proposal evaluation process than there is into the paper evaluation process.

The upshot of all of these difficulties is that writing grant proposals is a bit of a frightening activity. One can spend a lot of time writing proposals before getting a hit. This is frustrating and, if it goes on very long, career-limiting. (Tenure committees usually evaluate candidates on research, teaching, and service — but all too often money is the ten-ton elephant hiding in the corner.)

This writeup is a collection of random thoughts; it’s in no way complete. You have to read the call for proposals and parts of the GPG (NSF’s Grant Proposal Guide) carefully. You should talk to the program managers. You need to figure out how to work with your sponsored projects office and with your department’s internal grant-writing support people. Others have put useful information on the net; I found Cindy Grimm’s advice to be particularly good.

Writing a Proposal

To be funded, a proposal has to receive good reviews from respected researchers in your field. To get good reviews, there are three main things you need to show: a vision, a plan, and a capability. Let’s look at these in more detail, but first note that these don’t really match up with the parts of an NSF proposal as described in the Grant Proposal Guide. Rather, these are things that need to come out of the proposal as a whole.

Vision

The median annual income for a household in the United States is around $50,000. When you write a grant proposal, you’re typically asking the government to give you 2 to 100 times this amount (that is, roughly $100,000 to $5 million). Think about it: every dollar you request will have to come from some hapless taxpayer or corporation (or else increase the debt…) and additionally won’t be used to cure cancer or put a person on Mars (unless, of course, your work directly attacks one of these problems). My opinion is that when you ask for this level of funding, the request had better be backed up by a damn compelling vision for how the world will be different if the money is awarded.

The vision part of a proposal is tricky. It has to be concise — probably around a page — and compelling, while remaining strongly connected to the proposed work. You can’t spin a story about ending disease and poverty, then immediately segue into open problems of the 5-bit frobnicator. The best vision parts of proposals leave panelists feeling that they have a mandate: the problem is so important that the work has to be funded no matter what. I’ve read proposals like this; they’re rare and wonderful. They always get funded.

In summary, the purpose of this part of a proposal is to convince your review panel that the problem that you are attacking matters, that it needs to be solved. It must not make people bored or irritated.

Plan

Here’s the really hard part: you have to convince the proposal review panel that you have an approach, a plan of attack for your chosen research problem. You must share your ideas, insights, and early results in order to paint a picture of a plausible research program that will come to fruition in 3-5 years. This is not the place to hold back on your most ingenious idea for fear of having it stolen. In general, it’s hard to find a piece of work that is plausibly doable, but that doesn’t sound like more of the same. Panels, in general, are not very interested in funding more of the same; they want new approaches.

One interesting question is: How much work should you do before writing a proposal? I’ve heard people say that you should basically have finished a piece of work before writing the proposal about it, and then you use that grant to fund the next piece of work. This seems pretty dishonest and in any case this approach would be highly risky if you’ve published the work. On the other hand, the reality is that it’s pretty tough to write a convincing proposal without having some kind of preliminary results. So I guess in the end the answer is pretty simple: you strike a balance that feels right.

Capability

A good proposal makes it clear that you are the best person in the world to do the work that you are proposing. You absolutely do not want a panelist thinking “Gee… that’s a neat idea, but who’s this schmuck? I bet Jones at MIT could knock off that project in six months.”

The capability to do research can be demonstrated in different ways. First, even if you’ve never received NSF support, you should briefly describe your previous accomplishments in the area of interest. Second, you might want to find an excellent (and probably more senior) collaborator who can be a co-PI on the proposal. Third, you can demonstrate capability by association, whether it is with equipment such as the huge cluster of Cray-27 nodes available only at your institution, or with people, such as your unique industrial contacts.

It can be daunting to realize that your competition may have been working in the field for 25 years, may run an operation with a budget 20 times larger than yours, and may have won every award there is to get. In fact, these people often write incredibly polished proposals. However, it is highly likely that established people are working on established ideas. It is only the very best people in the field who keep doing extremely original work year after year, and you might as well just hope you don’t come up against them too often. In contrast, as a new person in the field you should have new ideas and approaches to bring to the table. Play this up in the proposal: it’s not just business as usual.

Other Issues

It’s important to make life easy for the people who will be evaluating your proposal. Assume yours is buried in the middle of a pile of about 20 proposals, and it will be read (probably on a plane to DC) by an exhausted person who would rather be doing something else. To make things easy for reviewers, keep it simple. Don’t assume everyone is interested in the deepest mathematics that you are capable of writing about. Use plenty of figures and graphs to break up the flow of text. Since some panelists like to skip around, try to make parts of the proposal as self-contained as possible and use informative section titles. Don’t refer to previous work as nascent or vacuous, since that work’s authors are probably on the panel.

Many of your colleagues have served on NSF panels. Get them to read your proposal and give feedback. Unfortunately, this will require that the proposal not be coming together on the day of the deadline.

Getting a Grant

OK, so far this post has actually been about how to write a proposal; now let’s talk about how to get a grant. Realistically, unless you are both very good and very lucky, you need to submit more proposals than you are really interested in writing. One of my colleagues (not jokingly, I suspect) talks about submitting them faster than they can be rejected. Writing lots of proposals is sort of a bummer since each one keeps you from doing something else important, like writing a paper. Also, you never, ever want to write a proposal for work that you don’t actually want to do, since of course that proposal would end up being the only one funded.

When a proposal is rejected there are several things to keep in mind. First, it may not have been reviewed by any experts in your exact field. Second, if you revise and resubmit the proposal, it will be evaluated by an entirely new panel. A friend of mine once had a proposal rejected, but with strong reviews — it was barely in the not-funded category. He diligently revised the proposal and the next (much better) version got only lukewarm reviews. Again, he revised and resubmitted and the third time around got quite negative reviews. This kind of experience is not terribly uncommon and for this reason a different friend of mine makes a point of never reading his NSF reviews at all (or so he claims).

Predicting Grad School Performance

I enjoy going out drinking with my colleagues, although it only seems to happen a few times a year. It should come as a surprise to nobody that professors are natural bullshitters and people always have good stories: nearly destroying a ticket booth at Alta while doing avalanche control work, barely sub-nuclear pyrotechnic displays out in the desert, skiing across Siberia during the good old days of the USSR, things like that. It is perhaps relevant that one of my colleagues, Erik Brunvand, is the son of the guy who popularized the term “urban legend.” We also tell work stories; these are slightly less colorful but this is one of my favorites:

At some point before my time at Utah, some professors got together and decided to bring a few principles to the chaotic process that is graduate admissions. The experiment worked like this.  First, the applications to graduate school from a number of previous years were gathered up and a spreadsheet was created containing every current graduate student in addition to all of the quantitative data from their applications.  This spreadsheet was sorted by various fields such as GRE score, undergraduate GPA, etc.  Then, the resulting lists of names were passed out to the faculty without telling them what the sorting keys were. Each professor was asked to choose which list or lists best sorted the graduate students by ability. Finally, the results were tallied up.  The most important predictor of grad student quality turned out to be whether the student arrived with an MS degree or not, followed closely by social security number and then phone number.  GRE and GPA were not predictors at all.

Can I Stop Lecturing Now?

In Fall 2010 I’ll teach my Advanced Embedded Systems course for the fifth time.  This is a class that I enjoy teaching and that students generally enjoy taking.  I created it from scratch and am fairly proud of it.  There’s just one problem: I’m ready to stop giving the lectures.  First, although I tweak and improve them each year, I’m getting somewhat bored with them.  Second — and much more important — I’ve become convinced that (in this case, not necessarily in general) lectures aren’t the best use of class time.  For one thing, even when students are actively engaged with the material, it’s hard for them to sit still and listen to someone for 75 minutes.  For another, the thing I really want students to get from the class is a taste of real-world embedded systems design and implementation.  They should be learning skills such as working in a team, understanding requirements, evaluating software architectures and other implementation choices, creating maintainable code, debugging in challenging environments, reviewing code, and figuring out how to test complex combinations of hardware and software.

Here’s my idea.  First, while giving the lectures this fall, I’ll arrange for good-quality videos to be recorded, so I can put them online.  Second, after this fall, I’ll give few or no lectures but rather ask the students to watch at least some of them online, outside of class.  Then we’ll spend the class periods — typically two 75-minute sessions per week — working out the design and implementation of a more open-ended and aggressive collection of projects than I had previously given (note that there are 3 hours of lab time per week in addition to the lecture time).  Also, after teams get their projects to work, I’ll require them to integrate these projects with each other, for example using a serial line or CAN bus.  Previously, when I have required interaction between different teams’ projects, this has turned out to be far more challenging than the students anticipated, even for relatively simple projects such as distributed time synchronization or a consensus protocol.  Also, I’d like to spend a fair amount of time in class doing code reviews.  It turns out that for a variety of reasons, such as the fact that they always get to throw away code after writing it, a lot of junior/senior-level CS and CE students are not exactly the kind of coders I’d hire for an ambitious embedded project.  Regular code reviews can help turn coders into real programmers. One final idea that I keep wanting to try, but haven’t yet, is to force teams to exchange code bases with each other partway through the semester.  This is basically evil and will probably cause me to take a hit on the teaching evaluations, but I think it is guaranteed to be a character-building experience.

Anyway, it’s possible that this experiment will make my course more awesome, or it may tank miserably.  I’d love to hear feedback and advice from readers, either in the comments here or in email. In a couple years I’ll let you all know how it went.

Book Review: Sustainable Energy — Without the Hot Air

The premises are simple.  First, energy consumption must be met by energy production.  Second, use of fossil fuels is unsustainable.  Third, no magical technological fix to the energy problem is going to arrive.  Finally, to understand a sustainable future, we must think quantitatively.  That is, to proceed with a debate about, for example, wind or solar without a firm understanding of the magnitudes involved is foolishness.  This last premise is where Sustainable Energy departs significantly from the vast majority of information available in the popular press, which trumpets dubious oversimplifications like “hydrogen is good” or “unplug your phone charger when not being used” without providing the larger context.

The central question this book deals with is: If we must stop using fossil fuels, can we keep expending the amount of energy that we currently do, or will our standard of living necessarily contract?  To attack this question, the first part of the book alternately describes the components of energy demand and potential alternative sources such as tides, wind, and solar.  Impressively, MacKay manages to create a certain amount of tension between the supply and demand sides of the equation, making this initial part of the book more entertaining than it perhaps sounds.

The second part investigates sustainable energy in more detail.  How much can we save by mass transit?  Electric cars?  Better insulation?  On the production side, what can we do with nuclear power?  With solar energy from Africa’s vast deserts?  The capstone of Sustainable Energy is a collection of five energy plans for Britain, each making different tradeoffs between nuclear or not, domestic power vs. imported, etc.  The message is clear: we can live without fossil fuels, but only at considerable expense and inconvenience.

MacKay’s presentation of the material is clever.  He writes in an engaging way, including plenty of personal opinions and anecdotes.  The graphs and figures are first-rate.  Person-sized units of power such as kWh per day help us understand energy at a level where we can each make a difference, as opposed to the more remote view promoted by nation-sized units like GWh per day.  Facts are backed up by simple, intuitive physical explanations that will appeal strongly to scientific-minded readers.  Finally, MacKay includes a lot of specific recommendations for ways that individuals can make a difference in their own energy consumption, often supported by data and stories from his own life.
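
As a quick example of why person-sized units help (my arithmetic, not a quote from the book): a kWh per day is almost exactly 42 watts,

  1\ \text{kWh/day} = \frac{1000\ \text{Wh}}{24\ \text{h}} \approx 42\ \text{W}

so leaving a modest incandescent bulb burning around the clock costs about 1 kWh per day, a quantity an individual can actually do something about.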

One could easily quibble with some of the specific quantities used in the book, such as the average wind speed in Scotland or the amount of unmined uranium in Africa.  However, to do this is to miss the point.  MacKay isn’t so much providing a collection of facts as a framework for thinking about energy supply and demand.  It is a certainty that over the next few years estimates of energy reserves will be refined and technologies like solar panels and batteries will improve a bit faster or slower than expected.  This new information may alter the nuances of MacKay’s message, but the fundamentals are so large that no invention short of practical fusion will render the book obsolete.  Similarly, an American reader could become irritated with the UK-centric nature of the writing, but again this misses the point: from the American point of view the major difference is that in the UK they use far less energy per person than we do.  Also, towards the end of the book MacKay specifically addresses the question of adapting his energy plans for the USA.

In summary, entering the energy debate without reading this book is like debating Christian theology without having read the Bible.  MacKay has created a work that is enjoyable to read and also provides a framework for understanding and discussing one of the most important problems faced by the developed world in the next 50 years.  An electronic copy of the entire book can be downloaded for free.

C Compilers Disprove Fermat’s Last Theorem

[Update: I wrote another post on this topic that may explain the underlying issues more clearly.]

Obviously I’m not serious: compilers are bad at solving high-level math problems and also there is good reason to believe this theorem cannot be disproved. But I’m getting ahead of myself. Recently — for reasons that do not matter here — I wanted to write C code for an infinite loop, where the compiler would not be able to figure out that the loop was infinite.  If I had merely written

  while (1) { }

or

  for (;;) { }

most optimizing compilers would see the loop’s inability to exit and generate code accordingly. For example, given this C code:

  void foo (void)
  {
    for (;;) { }
    open_pod_bay_doors();
  }

Most compilers will emit something like this:

  foo:
    L2: jmp L2

In this case the compiler emits neither the call to open_pod_bay_doors() nor the final return instruction because both are provably not executed.

Perhaps interestingly, LLVM/Clang recognizes that this slightly obfuscated infinite loop never exits:

  unsigned int i = 0;
  do {
    i+=2;
  } while (0==(i&1));

Faced with a loop optimizer that has some brains, I decided to stop messing around and wrote a loop that should thwart any compiler’s termination analysis:

  /* Exhaustively cycle through all (a,b,c) triples in [1..MAX]^3,
     stopping only if a counterexample to the n=3 case of Fermat's
     Last Theorem turns up. */
  const int MAX = 1000;
  int a=1,b=1,c=1;
  while ((c*c*c) != ((a*a*a)+(b*b*b))) {
    a++;
    if (a>MAX) {
      a=1;
      b++;
    }
    if (b>MAX) {
      b=1;
      c++;
    }
    if (c>MAX) {
      c=1;
    }
  }

This loop only terminates if it finds a counterexample to a special case of Fermat’s Last Theorem.  Fermat’s Last Theorem, of course, states that no solution exists for the equation aⁿ + bⁿ = cⁿ for positive integers a, b, and c and for integer n>2. Here n=3 and a,b,c are in the range [1..1000]. On a platform with 32-bit integers 1000 is a reasonable maximum because 2·1000³ is not much less than 2³¹.

It turns out that when optimizations are enabled, several compilers (LLVM/Clang 2.7, Open64-x86 4.2.3, Sun CC 5.10, and Intel CC 11.1.072) go ahead and permit this loop to terminate.  Specifically, when the loop is enclosed in a function, the compilers emit x86 assembly which looks something like this:

  fermat:
    ret

The implication, of course, is that the compiler has disproved Fermat’s Last Theorem.  Faced with this incredible mathematical discovery, I held my breath and added a line of code at the end of the function to print the counterexample: the values of a, b, and c.  Unfortunately, with their bluffs called in this fashion, all of the compilers emitted code that actually performed the requested computation, which of course does not terminate.  I got the feeling that these tools — like Fermat himself — did not have enough room in the margin to explain their reasoning.

What is really going on here is that compiler optimizations and termination analysis have long been at odds: if compilers were obligated to preserve the termination properties of the code they translate, they would be unable to perform (or would have more difficulty performing) some of the optimizations that they use to create efficient code.  A choice is being made by compiler developers — probably consciously, though it’s hard to be sure — to prefer speed over correctness. The news, however, is not all bad: Microsoft’s C compiler, the Wind River Diab C compiler, and several versions of GCC all did the right thing, changing the termination properties of none of the examples I tried.

Update from Sat 5/1: It turns out the LLVM folks have been working on this problem lately and their latest SVN now does not contain this bug.  Very nice!

Update from Sat 5/1: Someone on Reddit noticed (and one of my students confirmed) that the Microsoft compilers do have termination bugs.  The compilers in both Visual Studio 2008 and 2010 generate code for the Fermat function, but then calls to this function are dropped because it is believed to be free of side effects (this was exactly what LLVM did before they fixed the problem).

Update from Friday 4/30

I’ll try to clarify a few of the questions that have come up on Reddit and in the comments here.  Also I fixed a mistake in the statement of Fermat’s Last Theorem that someone on Reddit pointed out.  Thanks!

Q: Does this actually matter at all?

A: Yes, but in very specialized situations that usually only come up when developing embedded software.  One example is described here: the poster wants the program being simulated to hang when main() exits, but LLVM deletes the loop that was intended to hang up the processor.  The workaround was to compile the code with optimizations turned off.  Another example happens when an embedded system has updated its firmware and wants to do nothing until the watchdog timer reboots the processor into the new version.  It’s no coincidence that gcc and the Wind River C compiler — both of which are heavily used in the embedded world — get termination right.

Q: Since infinite loops are bad style, isn’t it OK for the compiler to terminate them?  Shouldn’t people be putting the CPU to sleep, blocking the running thread, or whatever?

A: First, not all programs have an operating system or even a threading system to call out to. Embedded software commonly runs on the bare metal.  Second, the meaning of a program is defined by the language standard and style has nothing to do with it.  See my earlier post The Compiler Doesn’t Care About Your Intent.

Q: Does the C standard permit/forbid the compiler to terminate infinite loops?

A: The compiler is given considerable freedom in how it implements the C program, but its output must have the same externally visible behavior that the program would have when interpreted by the “C abstract machine” that is described in the standard.  Many knowledgeable people (including me) read this as saying that the termination behavior of a program must not be changed.  Obviously some compiler writers disagree, or else don’t believe that it matters.  The fact that reasonable people disagree on the interpretation would seem to indicate that the C standard is flawed.  In contrast, the Java language definition is quite clear that infinite loops may not be terminated by the JVM.

Q: Are you saying the compiler should do termination analysis?  That’s impossible by trivial reduction to the halting problem.

A: Termination analysis does not need to be part of the compiler at all. However, I (and others) would claim that the compiler should perform a termination analysis of any useless loop before deleting it.  Although the general problem is not computable, many specific instances can be easily solved.

Q: Does the Fermat code in this post execute any signed integer overflows or other undefined behaviors?

A: I don’t believe so.

Update from Saturday 5/1

Q: Didn’t you know Fermat’s Last Theorem was proved in 1995?

A: I did know that.  Since I got my math degree in 1995, it would have been very difficult for me to miss this event :). I was making a weak joke and also referring to the fact that proofs, especially complicated ones, can contain errors. In fact, as someone noted in the comments, Wiles’ initial proof was wrong. Also note that the n=3 special case was proved much earlier, in 1770.

Q: What’s the best workaround if you really want an infinite loop in C?

A: As several people have pointed out, looping on a volatile-qualified variable is probably the best choice. But keep in mind that compilers don’t always respect volatile….
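
For concreteness, here’s a minimal sketch of that workaround (my code, not from the original discussion):

  /* Because 'spin' is volatile, the compiler must assume its value
     can change at any time.  It therefore cannot prove the loop
     infinite (or finite) and has to emit a real loop. */
  static volatile int spin = 1;

  void hang_forever (void)
  {
    while (spin) { }
  }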

One more update from Saturday 5/1

Here’s a fun complete program that is more compelling than the code above because it explicitly uses a return value from the “theorem disproved” branch of the code:

int fermat (void)
{
  const int MAX = 1000;
  int a=1,b=1,c=1;
  while (1) {
    if (((a*a*a) == ((b*b*b)+(c*c*c)))) return 1;
    a++;
    if (a>MAX) {
      a=1;
      b++;
    }
    if (b>MAX) {
      b=1;
      c++;
    }      
    if (c>MAX) {
      c=1;
    }
  }
  return 0;
}

#include <stdio.h>

int main (void)
{
  if (fermat()) {
    printf ("Fermat's Last Theorem has been disproved.\n");
  } else {
    printf ("Fermat's Last Theorem has not been disproved.\n");
  }
  return 0;
}

Here’s what the Intel and Sun compilers have to say:

regehr@john-home:~$ icc fermat2.c -o fermat2
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.
regehr@john-home:~$ suncc -O fermat2.c -o fermat2
"fermat2.c", line 20: warning: statement not reached
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.

Open64-x86 and LLVM/Clang 2.7 have the same behavior.  Although plenty of folks in the peanut gallery disagree, it seems perfectly clear to me that this is a serious error in these compilers.  I mean, why return 1?  Is that worse or better than returning 0?  Neither result makes any sense.

Belated Introduction

When I started this blog I didn’t write an introductory post because, of course, I had no readers.  Lately Google Reader indicates that I have a bit over 100 subscribers so I thought it might be time for a quick introduction.

For a long time I was uninterested in blogging because I thought the only good blogs were highly focused ones, and I wasn’t interested in writing a blog like that.  Also, I wasn’t interested in taking on a new long-term time commitment since I already have enough of these.  Finally, the nature of my job (and life) means that I’ll likely post in a bursty fashion that would seem annoying to readers.

Obviously I eventually decided to start a blog.  The main triggering event was that I became extremely bored with the FSP blog (reading it is like going to a faculty meeting), which used to be perhaps my favorite blog and was the original source of my observation that a good blog should be focused.  In other words I decided that posting about random topics was OK and perhaps even interesting. Also, I realized that a blog doesn’t need to be a long-term commitment: if I get tired of blogging (and I expect this will happen in some number of years) I’ll just drop it until it seems interesting again.  The advent of good RSS readers means that bursty posting is not a problem (my view of the web solidified in roughly 1996 so I was effectively the last person to become a user of these readers).  Finally, I’m on sabbatical this year and have been taking a bit more time to think about the big picture than I used to.  Thinking, for me, usually leads to the desire to write.  It’s a common flaw among academics.

Anyway, back to the intro.  I’m an associate professor of Computer Science at the University of Utah, coming up on the end of my first sabbatical.  The sabbatical arrived at about the right time: the tenure runup left me feeling a little burned out and ready to think about what kind of directions I’d like to take my career in, in the long run.

I teach courses in computer systems, such as operating systems, embedded systems, and advanced compilers.  I enjoy teaching a lot and it only becomes a drag when students perform poorly or are apathetic.

My research is broadly in computer systems and is specifically about compilers and embedded systems.  My work is usually centered on software tools, which are based on the idea that we can use software to create better software.  It’s a lot of fun and it’s a great thing to be doing from academia, since industry is generally very focused on the next product: companies don’t take the time to create tools that can add a lot of value in the longer run.

I supervise a group that currently has six students.  Running a group is fun for multiple reasons.  First, it’s neat to watch students change from their early, tentative selves into mature and confident researchers.  Second, the students (on a good day) act as amplifiers, getting far more work done than I could ever do alone.

Most of the time I have the best job in the world.  The thing I like most is that I can choose the problems I work on.  If I want to focus for multiple years on something that is of little interest to most people (e.g. compiler bugs), I can do that.  If I wish to bounce among projects in a flighty way, I can do that too.  It’s liberating.

I’m extremely fortunate to have healthy and inquisitive children, Jonas is 5 and Isaac is 3.  It’s hard to imagine life without them.  My wife Sarah is also a professor at Utah.  She, I think, slacked off on her work a bit when the kids were little (out of necessity — Utah’s parental leave policy only solidified after the boys were born) but lately she has been making up for lost time and is working on tons of projects; her career seems to be progressing well.  She’ll be lucky if she can spend another 10-15 years doing good research before (I predict) being roped into some sort of administrative job.

This summer we’ll have lived in Utah for 10 years, and it has been great.  Salt Lake City itself is OK, probably not much different from Indianapolis or Kansas City, but the setting is amazing.  Mountains can be found in all directions, and in fact the state contains entire mountain ranges that few people have heard of (The Wah Wahs?  The Deep Creeks?  The Silver Island Range?).  Just east of the city, and right outside my front door, is the Wasatch Range, climbing 7000′ above the city.  A bit further away are the red rock deserts including the incredible San Rafael Swell, which is barely close enough to visit on a day trip, but really much better suited to 3-4 day car camping trips. Utah’s vast West Desert fills around a third of the state; there you can camp out for days and drive hundreds of miles on dirt roads without seeing anyone.  My list of trips to take in Utah never gets shorter since every time I take a trip, I see a half dozen more places to go.  My hiking buddy (and local guidebook author) John Veranth has spent hugely more time outdoors in Utah than I have, and says the same thing is true for him too.

Anyway, to sum up: this blog is probably going to be active for a finite time.  Posting will be bursty and on totally random topics :).

Into the Brooks Range, Part 3

[Continued from Part 1 and Part 2.]

August 6 — We See Bears

Finally we were back to walking a wide river valley not unlike our first day hiking. To stay in the river bed, we had to pass through some dense thickets of willow brush. Since it’s very bad to surprise a brown bear, we made lots of noise. Later, while walking along the riverbank, Eric heard something and motioned for us to stop. Down below on the gravel, a big female bear was standing up on hind legs, making warning noises: we had violated her personal space, which in the Arctic encompasses a much larger area than, for example, in southeast Alaska, where the bear density is higher. She calmed down after we stopped coming nearer, and we saw that she had three cubs nearby. Eventually, they wandered off into the willow scrub and we moved on. Later, we camped in a really pretty site up on the river bank.

I hadn’t spent time with brown bears before this trip, and it was interesting to do so. Most of the time, of course, we were managing bear risk as opposed to dealing with actual bears. For example, we carried a separate cook tent and never stored food in our tents overnight. We each had a can of pepper spray at hand basically at all times. Statistically speaking, this stuff is supposedly more effective than carrying a firearm, and certainly it poses less risk to the carrier (though there always seemed to be the possibility of being forced to discharge it into the wind). Even Eric, who has extensive bear experience, was hard-pressed to explain how one might distinguish a mock charge from a real charge before it was too late. A few times we joked that if the bears had their act together, they’d deploy one in front while another snuck up behind us. Fundamentally, though, this kind of tactic is unnecessary since a single large bear could easily kill all of a five-person group like ours. Fortunately, the Brooks Range bears are not at all habituated to humans; their suspicion about the new shapes and smells causes them to back off more often than not, and attacks are rare (though not unheard of).

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

August 7 — A Warm Day and a Swim

This was a sunny, warm day with generally easy walking. The Ivishak was finally deep enough to contain plausible fishing holes — Ben had carried his fly rod the whole trip waiting for this. But no luck, it was too early for the arctic char sea run. One excellent location had deep, clear water and Ben, Eric, and I couldn’t resist a quick dip to wash off a week’s worth of grime and sweat. I’d guess the water was around 50-55 degrees: cold enough to trigger the gasp and hyperventilation reflexes, but not producing a strong feeling of impending cardiac arrest.

In the evening we found a gorgeous campsite on the river bank and Ben fished again. Around 11:30 Eric started yelling for Ben to come up to camp: a bear was prowling around on the opposite bank. We watched it foraging for a while: it was acting natural and hadn’t heard us over the river noise. Before turning in we banged some pots and pans to make sure it knew we were there: this got its attention right away and it stood on hind legs to try to figure us out. It lost interest quickly and wandered off, but even so most of us took a pot or pan to bed that night as a noise-maker in case it came back to investigate further. As far as we know, it didn’t come back.

Throughout the trip, everyone else did a better job than I did in spotting animals; my vision is about 20/50 and I decided not to wear corrective glasses most of the time. Also, as Sarah enjoys pointing out, I’m not the most observant person in the world. Eric on the other hand has 20/15 vision and his job depends on spotting wildlife in difficult conditions. Throughout the trip we were seeing plenty of caribou and raptors plus a single moose; these sightings quickly became routine and I’m only mentioning the more interesting ones.

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

August 8 — Last Day Walking and a Wedding

Our last walking day was cloudy and cool. The steep valley walls made it best to stick to the gravel bars and we spent most of the day in sandals. The frequent river crossings were uncomfortably cold. Also, as more side drainages added water to the Ivishak, and as it rained around us, the crossings got deeper. They weren’t scary but certainly we had to focus on maintaining our footing in the current. By the end of the day, crossing the main channel would have been dicey.

Finally we arrived at the big alluvial fan containing the takeout air strip. Although we were certain the location was correct (Shannon had been there before, as the starting point of a rafting trip) we had no luck finding any wheel tracks. Shannon went out and put a makeshift windsock on the part of the fan where she thought Kirk would land.

In the evening we had a fun surprise: Shannon and Ben had decided to get married. They asked Eric if he would marry them, and he was happy to (an adult Alaska resident can officiate at a wedding in the state). It was a nice ceremony in the most beautiful possible setting. Afterwards, we had drinks — sort of. Ben had stashed a mini bottle of gin that we mixed up with fizzy electrolyte drink tablets.

Shannon and Ben are a neat couple. They live in a cabin near Denali NP. They do various kinds of work such as guiding in the summer and working in Antarctica in winter. It sounds like an interesting life and I like to secretly think that in some alternate universe I’d have done this kind of thing instead of, or at least prior to, becoming a professional academic.

Overnight, a front rolled through and we had hours of high winds mixed with rain and sleet. We were fortunate to have set up camp in the lee of a small rock outcrop, but even so the biggest gusts brought my tent ceiling more than halfway down to my head. For a while I was pretty sure the tent would collapse or else go airborne. However, it did not, perhaps because I had added three extra guy lines. Nobody slept much and in fact around midnight we found ourselves all outdoors in the miserable driving rain putting extra-large rocks on our tent stakes. Ben and Shannon’s tent had partially blown down and they had to realign it; Bill and Eric had pretty solid tents and I — having probably the least weather-worthy tent — was very lucky to have set it up the right way.

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

August 9 — In a Snowstorm and Not Getting Out

In the morning the winds had died and we found the snow line barely above camp. The cloud level was only a few hundred feet higher. Still, the weather was improving and we hoped the plane could make it in. Eric and I took a short hike, but we didn’t want to wander far in case Kirk arrived.

As the day progressed the weather deteriorated and we realized we were almost certainly in for an extra night. We moved the tents into a slightly more sheltered configuration in case the winds picked up. In the afternoon it began to snow pretty hard and we spent the rest of the day chatting in the cook tent and napping. We had little reserve food and had an extremely light dinner before going to bed hungry.

During the night it kept snowing. My light tent let in every little gust of wind and I started to get cold. As part of a weight-saving plan I brought only a 30 degree sleeping bag, knowing that it would make hiking easier but that I would suffer if things went badly. So I shivered, wearing basically every single piece of clothing I had brought along, including the fleece top that had been serving as a pillow.

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

August 10 — Snow and Sun and Out

We woke to perhaps six inches of snow, which represented a new obstacle to getting out: the bush pilot can’t land if he can’t see the terrain. Someone told a story of a bush pilot who overflew his clients after a snowstorm and dropped them a shovel, circling while they cleared the strip. With the threat of a second extra night out, we rationed food pretty severely and stayed hungry.

As the day progressed it partially cleared and the snow began to burn off. It was incredibly pretty, definitely worth the discomfort and inconvenience. Sometime in the morning we heard a plane and rushed to take down tents — but the plane passed overhead. The rest of the day we read and napped, not wanting to stray far from the air strip. By late afternoon we were resigned to another night, but then around 6:30 Kirk showed up. We packed up and ran for the plane, not wanting to keep him there any longer than we had to. The flight out to Arctic Village was spectacular, with clear air this time.

It turned out Kirk had tried hard to get us out the previous day, but had been turned back by severe turbulence. His brother had also tried, from a different direction, also unsuccessfully. This was something interesting to learn about bush pilots: their clients’ lives are in their hands and they take this responsibility very seriously. This, in combination with the levels of skill and experience of the best pilots, helped put the cost of bush plane travel into perspective (it constituted one of the major parts of the total trip expense).

At Arctic Village it was clear that we weren’t going any further. An old guy with an ATV gave Eric and me a ride into town, where by some stroke of luck the store was still open. We stocked up on high-calorie foods and walked back to the air strip to wait for Ben and Shannon. When they arrived, we ate a huge amount of spaghetti and candy bars. Unfortunately, the little visitor center was locked up, so we slept on its covered porch. I burned a bit of sat phone time to tell Sarah all was well; luckily she had been adequately briefed on the possibility we’d be stranded out.

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

August 11 — Heading Home

Wright Air Service was aware of our situation and sent an extra plane up to Arctic Village, putting us in Fairbanks by noon. Now we were two days late and Bill had missed his flights home. My parents were in Fairbanks with a car and Eric and I planned to ride down to Anchorage with them. However, Eric was stressed about work and stuff and flew home instead. I hadn’t yet missed my scheduled flight, a redeye late on the 11th, so we had a leisurely drive south. Luckily I had taken a shower in Bill’s hotel room or else we’d probably have driven with the windows open the whole way.

[Photo gallery: all photos © William B. Thompson or John Regehr 2009.]

Notes

All pictures are by me or Bill Thompson. If you care, Bill’s pictures all have filenames starting with “dsc” whereas mine all start with “picture.” I shot a number of panoramas too.  We used identical equipment: Nikon D90 with the kit lens, an 18-105 zoom. Bill’s landscape pictures often look better than mine (at least partly) because he shot in raw format and then used a good converter, whereas I let the camera do everything.

Most of my gear performed admirably; the fact is that today’s mid-range equipment is overkill for almost any normal use. My favorite items were the Smartwool midweight wool long underwear top and bottom that became more or less a second skin for the cold parts of this trip. Completely comfortable, and not stinky like synthetic. My puffy synthetic Patagonia jacket was really nice as well, and way too warm for anything except sitting around or sleeping. The Arcteryx Bora 95 pack was awesome: big and bombproof. I have no idea when mine was made; I picked it up used from a guy who was moving away from SLC. Like all Arcteryx products, these are more or less prohibitively expensive to buy new. The La Sportiva Trango Trek GTX boots I took were about perfect: decent support, good waterproofness, and not unbearably heavy. After over a week in very tough country they look and work about like new. To carry water I took a single 1-liter Platypus bottle; these are super light and can be collapsed to pocket-size when empty. Probably the main drawbacks are the easily-dropped cap and the narrow opening, which makes it slow to fill. My old Cascade Designs Z-Rest performed about as well as it ever does, which is to say that it’s light and respectably comfortable, but really wants to be placed on soft ground.

A few items I was less happy with included the 30 degree Marmot synthetic sleeping bag that I got for cheap, which had very little loft. It weighed about the same as Shannon’s 5 degree bag from Western Mountaineering, which had so much loft it looked inflated. Seriously, you can bounce a quarter off that kind of bag. My Sierra Designs Sirius 2 tent was decent overall, but the open vestibules were a major drawback. First, they provided very little shelter for keeping items dry outside the tent. Second, they acted like wings to catch the wind: not good. Also, this tent is pretty short; I’m six feet and had to lie diagonally to keep from pressing head and feet against the tent ends.

Although I spend a lot of time outdoors and car camp as frequently as possible, my previous backpacking experience was minimal — probably no more than two weeks total, prior to this trip. So it was fun to refine my techniques and learn new tricks. One of the most useful was rigging a clothesline inside the tent so that socks and such could be dried overnight. Another good one was putting clothing and other items at the head and foot of my sleeping bag to keep it from getting wet from condensation due to touching the outside of the tent. A great pair of camp shoes can be improvised out of a pair of Tevas and a pair of neoprene socks.