## Flower Power

The Wasatch Range peaks are 7000′ higher than the nearby Salt Lake Valley. This has many nice side effects but one of my favorites is that a wide variety of micro-climates is available within a small geographical region. In late Fall or early Spring it can be calmly drizzling in the city, but in the mountains it’s storming like the Himalayas. Before having kids I’d often get up around 5am in July and August to go for a hike. Even on days that are going to be over 100 degrees in the valley, it’s generally pretty chilly at 8000′ at that time of day.

This week the foothills near my house have the most flowers in the 6000′ to 7000′ range. Below this things are starting to dry out; higher up, the snow has only recently melted and buds are still trying to open. As the summer progresses, the band where flowers are found will slowly increase in elevation. The nice thing about this arrangement is that for about four months of the year, there’s somewhere within about a 45 minute drive that has wildflowers at their peak. The density of flowers in the foothills doesn’t approach the wall-to-wall color seen in some of the real alpine meadows, but some of these photos came out pretty well.

[photo gallery]

## Compilers and Termination Revisited

My earlier post C Compilers Disprove Fermat’s Last Theorem generated a good amount of discussion both here and on Reddit.  Unfortunately, the discussion was riddled with misunderstandings. Some of this was because the topic is subtle, but some was my fault: the post was intended to be lightweight and I failed to explain the underlying issues well at all.  This post gives a more complete picture.

Our goal is to determine the meaning of this C program:

```
#include <stdio.h>

int fermat (void) {
  const int MAX = 1000;
  int a=1, b=1, c=1;
  while (1) {
    if ((a*a*a) == ((b*b*b) + (c*c*c))) return 1;
    a++;
    if (a > MAX) {
      a = 1;
      b++;
    }
    if (b > MAX) {
      b = 1;
      c++;
    }
    if (c > MAX) {
      c = 1;
    }
  }
  return 0;
}

int main (void) {
  if (fermat ()) {
    printf ("Fermat's Last Theorem has been disproved.\n");
  } else {
    printf ("Fermat's Last Theorem has not been disproved.\n");
  }
  return 0;
}
```

This program is a simple counterexample search; it terminates if it is able to disprove a special case of Fermat’s Last Theorem.  Since this theorem is generally believed to be true, we would expect a counterexample search to run forever.  On the other hand, commonly available C compilers emit terminating code:

```
regehr@john-home:~$ icc fermat2.c -o fermat2
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.

regehr@john-home:~$ suncc -O fermat2.c -o fermat2
"fermat2.c", line 20: warning: statement not reached
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.
```

Before proceeding, let’s clarify a few things.  First, I am not asking this question:

> How could I change this code so that the compiler will respect its termination characteristics?

This is easy and there are several reliable ways to do it, for example using volatile variables or inline assembly.
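For example, here is a minimal sketch of both idioms (the volatile flag is portable C; the empty asm statement is a GCC-style extension, shown only as an illustration):

```
/* Sketch of two ways to write a loop the optimizer must leave alone. */

volatile int keep_going = 1;

void spin_volatile (void) {
  /* Each iteration reads a volatile object, which is an observable
     access, so the loop cannot be proven free of side effects. */
  while (keep_going) { }
}

void spin_asm (void) {
  for (;;) {
    /* GCC-style: the compiler must assume this opaque statement reads
       and writes memory, so it cannot delete the loop. */
    __asm__ volatile ("" ::: "memory");
  }
}
```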

Second, I am not interested in “answers” like this:

> It is obvious what is going on here: the compiler sees that one path out of the function is dead, and then deduces that the only remaining path must be live.

This observation is not untrue, but it’s a little like explaining that World War II happened because people couldn’t all just get along.  It completely fails to get at the heart of the matter.

Third, there are no integer overflow games going on here, as long as the code is compiled for a platform where an int is at least 32 bits. This is easy to see by inspecting the program: the largest value it ever computes is 1000³ + 1000³ = 2,000,000,000, which fits in a 32-bit signed int. The termination problems are totally unrelated to integer overflow.

# Program Semantics

A program’s meaning is determined by the semantics of the language in which it is written.  The semantics tells us how to interpret constructs in the language (how wide is an integer? how does the “if” operator work?)  and how to put the results of operations together into an overall result.  Some computer languages have a formal mathematical semantics, some have a standards document, and some simply have a reference implementation.

Let’s look at a few examples.  I’ll continue to use C, and will be quite informal (for a formal take on the meaning of C programs, see Michael Norrish’s PhD thesis).  To keep things simple, we’ll look only at “self-contained” programs that take no inputs.  Consider this program:

```
int main (void) {
  return 3;
}
```

It means {{3,””}}.  The notation is slightly cluttered but can be read as “the program has a unique interpretation which is to return 3 and perform no side effects.”  To keep things simple, I’m representing side effects as a string printed to stdout.  Of course, in the general case there are other kinds of side effects.

Here’s a slightly more complex program:

```
int main (void) {
  int a = 1;
  return 2 + a;
}
```

It also means {{3,””}} since there is no possibility of integer overflow.

Not all programs have a unique meaning.  Consider:

```
int main (void) {
  unsigned short a = 65535;
  return a + 1;
}
```

The meaning of this program is {{65536,””}, {0,””}}.  In other words, it has two meanings: it may return 65536 or 0 (in both cases performing no side-effecting operations), depending on the implementation.  If int can represent every value of unsigned short (the common case), a promotes to int and the sum is 65536; if it cannot, a promotes to unsigned int and the addition wraps around to 0.

Another way that a C program can gain multiple meanings is by performing operations with unspecified behavior.  Unlike implementation defined behavior, where the implementation is forced to document its choice of behavior and use it consistently, unspecified behavior can change even within execution of a single program.  For example:

```
int a;

int assign_a (int val) {
  a = val;
  return val;
}

int main (void) {
  assign_a (0) + assign_a (1);
  return a;
}
```

Because the order of evaluation of the subexpressions in C is unspecified, this program means {{0,””}, {1,””}}.  That is, it may return either 0 or 1.

This C program:

```
#include <stdio.h>

int main (void) {
  return printf ("hi\n");
}
```

means {{0,””}, {1,”h”}, {2,”hi”}, {3,”hi\n”}, {-1,””}, {-2,””}, {-3,””}, …}.  The 4th element of this set, with return value 3, is the one we expect to see.  The 1st through 3rd elements indicate cases where the I/O subsystem truncated the string.  The 5th and subsequent elements indicate cases where the printf() call failed; the standard mandates that a negative value is returned in this case, but does not say which one.  Here it starts to become apparent why reasoning about real C programs is not so easy.  In subsequent examples we’ll ignore program behaviors where printf() has something other than the expected result.

Some programs, such as this one, don’t mean anything:

```
#include <limits.h>

int main (void) {
  return INT_MAX + 1;
}
```

In C, overflowing a signed integer has undefined behavior, and a program that does this has no meaning at all.  It is ill-formed. We’ll denote the meaning of this program as {{UNDEF}}.

It’s important to realize that performing an undefined operation has unbounded consequences on the program semantics.  For example, this program:

```
#include <limits.h>

int main (void) {
  INT_MAX + 1;
  return 0;
}
```

also means {{UNDEF}}. The fact that the result of the addition is not used is irrelevant: operations with undefined behavior are program cancer and poison the entire execution. Many real programs are undefined only sometimes.  For example we can slightly modify an earlier example like this:

```
int a;

int assign_a (int val) {
  a = val;
  return val;
}

int main (void) {
  assign_a (0) + assign_a (1);
  return 2 / a;
}
```

This program means {{UNDEF}, {2,””}}.  Showing that a real C program has well-defined behavior in all possible executions is very difficult.  This, combined with the fact that undefined behavior often goes unnoticed for some time, explains why so many C programs contain security vulnerabilities such as buffer overflows, integer overflows, etc.

One might ask: Is a C program that executes an operation with undefined behavior guaranteed to perform any side effects which precede the undefined operation?  That is, if we access some device registers and then divide by zero, will the accesses happen?  I believe the answer is that the entire execution is poisoned, not just the parts of the execution that follow the undefined operation.  Certainly this is the observed behavior of C implementations (for example, content buffered to stdout is not generally printed when the program segfaults).
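A small experiment makes this concrete (my example; the details are implementation specific, but on a typical Linux box with stdout redirected to a file the buffered text is lost):

```
#include <stdio.h>

int main (void) {
  /* This text sits in stdio's buffer; when stdout is redirected to a
     file it is typically still unflushed at the moment of the crash. */
  printf ("this side effect precedes the undefined operation");

  /* Undefined behavior: null pointer dereference. In practice this
     segfaults on most hosted platforms, and the buffered text above
     is never written out. */
  volatile int *p = 0;
  return *p;
}
```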

Finally we’re ready to talk about termination.  All examples shown so far have been terminating programs.  In contrast, this example does not terminate:

```
#include <stdio.h>

int main (void) {
  printf ("Hello\n");
  while (1) { }
  printf ("World\n");
  return 0;
}
```

Clearly we cannot find an integer return value for this program since its return statement is unreachable.  The C “abstract machine,” the notional C interpreter defined in the standard, has an unambiguous behavior when running this program: it prints Hello and then hangs forever.  When a program behaves like this we’ll say that its meaning is {{⊥,”Hello\n”}}.  Here ⊥ (pronounced “bottom”) is simply a value outside the set of integers that we can read as indicating a non-terminating execution.

Assuming that signed integers can encode values up to two billion (this is true on all C implementations for 32- and 64-bit platforms that I know of), the semantics that the abstract C machine gives to the Fermat program at the top of this post is {{⊥,””}}.  As we have seen, a number of production-quality C compilers have a different interpretation.  We’re almost ready to get to the bottom of the mystery but first let’s look at how some other programming languages handle non-terminating executions.

# Java

Section 17.4.9 of the Java Language Specification (3rd edition) specifically addresses the question of non-terminating executions, assigning the expected {{⊥,””}} semantics to a straightforward Java translation of the Fermat code. Perhaps the most interesting thing about this part of the Java Language Specification is the amount of text it requires to explain the desired behavior.  First, a special “hang” behavior is defined for the specific case where code executes forever without performing observable operations.  Second, care is taken to ensure that an optimizing compiler does not move observable behaviors around a hang behavior.

# C++

C++0x, like Java, singles out the case where code executes indefinitely without performing any side effecting operations.  However, the interpretation of this code is totally different: it is an undefined behavior.  Thus, the semantics of the Fermat code above in C++0x is {{UNDEF}}.  In other words, from the point of view of the language semantics, a loop of this form is no better than an out-of-bounds array access or use-after-free of a heap cell. This somewhat amazing fact can be seen in the following text from Section 6.5 of the draft standard (I’m using N3090):

> A loop that, outside of the for-init-statement in the case of a for statement,
>
> • makes no calls to library I/O functions, and
> • does not access or modify volatile objects, and
> • performs no synchronization operations (1.10) or atomic operations (Clause 29)
>
> may be assumed by the implementation to terminate.
>
> [ Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. —end note ]

Unfortunately, the words “undefined behavior” are not used.  However, anytime the standard says “the compiler may assume P,” it is implied that a program which has the property not-P has undefined semantics.

Notice that in C++, modifying a global (or local) variable is not a side-effecting operation.  Only actions in the list above count.  Thus, there would seem to be a strong possibility that real programmers are going to get burned by this problem.  A corollary is that it is completely clear that a C++ implementation may claim to have disproved Fermat’s Last Theorem when it executes my code.
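Here’s a sketch of the kind of code that gets burned (my example, not from the standard):

```
/* Sketch: this loop modifies only an ordinary global, which is not on
   the list of side effects quoted above. A C++0x implementation may
   therefore assume the loop terminates and delete it, letting
   execution fall through into whatever code comes next. */
unsigned long counter;

void spin (void) {
  while (1) {
    counter++;   /* not volatile, not I/O, not synchronization */
  }
}
```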

We can ask ourselves: Do we want a programming language that has these semantics?  I don’t, and I’ll tell you what: if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email.  If I get 50 such responses, I’ll formally request that the C++ Standard committee revisit this issue.  I haven’t done this before, but in an email conversation Hans Boehm (who is on the C++ committee) told me:

> If you want the committee to revisit this, all you have to do is to find someone to add it as a national body comment.  That’s probably quite easy.  But I’m not sure enough has changed since the original discussion that it would be useful.

Anyway, let me know.

# Haskell

Haskell has a bottom value (⊥) that inhabits every type.  Bottom is the value of an expression that raises an error or fails to terminate.  Interestingly, Haskell fails to distinguish between the error and non-terminating cases: this can be seen as trading diagnostic power for speed.  That is, because errors and infinite loops are equivalent, the compiler is free to perform various transformations that, for example, print a different error message than one might have expected.  Haskell users (I’m not one) appear to be happy with this and in practice Haskell implementations appear to produce perfectly good error messages.

# Other Languages

Most programming languages have no explicit discussion of termination and non-termination in their standards / definitions.  In general, we can probably read into this that a language implementation can be expected to preserve the apparent termination characteristics of its inputs. Rupak Majumdar pointed me to this nice writeup about an interesting interaction between a non-terminating loop and the SML type system.

# C

Ok, let’s talk about termination in C.  I’ve saved this for last not so much to build dramatic tension as because the situation is murky.  As we saw above, the reality is that many compilers will go ahead and generate terminating object code for C source code that is non-terminating at the level of the abstract machine.  We also already saw that this is OK in C++0x and not OK in Java.

The relevant part of the C standard (I’m using N1124) is found in 5.1.2.3:

> The least requirements on a conforming implementation are:
>
> • At sequence points, volatile objects are stable in the sense that previous accesses are complete and subsequent accesses have not yet occurred.
> • At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
> • The input and output dynamics of interactive devices shall take place as specified in 7.19.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.

Now we ask: Given the Fermat program at the top of this post, is icc or suncc meeting these least requirements?  The first requirement is trivially met since the program contains no volatile objects.  The third requirement is met; nothing surprising relating to termination is found in 7.19.3.  The second requirement is the tricky one.  If it is talking about termination of the program running on the abstract machine, then it is vacuously met because our program does not terminate.  If it is talking about termination of the actual program generated by the compiler, then the C implementation is buggy because the data written into files (stdout is a file) differs from the data written by the abstract machine.  (This reading is due to Hans Boehm; I had failed to tease this subtlety out of the standard.)

So there you have it: the compiler vendors are reading the standard one way, and others (like me) read it the other way.  It’s pretty clear that the standard is flawed: it should, like C++ or Java, be unambiguous about whether this behavior is permitted.

# Does It Matter if the Compiler Terminates an Infinite Loop?

Yes, it matters, but only in fairly specialized circumstances.  Here are a few examples.

The Fermat program is a simple counterexample search.  A more realistic example would test a more interesting conjecture, such as whether a program contains a bug or whether a possibly-prime number has a factorization.  If I happen to write a counterexample search that fails to contain side-effecting operations, a C++0x implementation can do anything it chooses with my code.

Linux 2.6.0 contains this code:

```
NORET_TYPE void panic (const char * fmt, ...)
{
  ... do stuff ...
  for (;;)
    ;
}
```

If the compiler optimizes this function so that it returns, some random code will get executed.  Luckily, gcc is not one of the compilers that is known to terminate infinite loops.  (Michal Nazarewicz found this example.)

In embedded software I’ll sometimes write a deliberate infinite loop, for example to hang up the CPU if main() returns.  A group using LLVM for compiling embedded code ran into exactly that problem, causing random code to run.

When re-flashing an embedded system with a new code image, it would not be uncommon to hang the processor in an infinite loop waiting for a watchdog timer to reboot the processor into the new code.
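A sketch of that idiom, with an invented watchdog register for illustration:

```
/* Hypothetical reflashing sequence: stop servicing the watchdog and
   spin until it resets the processor into the new image. The register
   name and address are made up for this sketch. */
#define WDT_SERVICE_REG (*(volatile unsigned int *)0x40001000u)

void reboot_into_new_image (void) {
  WDT_SERVICE_REG = 0;   /* stop petting the watchdog */
  for (;;) { }           /* a compiler that "terminates" this loop lets
                            the CPU run off into unknown code instead */
}
```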

Another plausible bit of code from an embedded system is:

```
while (1) {
#ifdef FEATURE_ONE
  do_feature_one();
#endif
#ifdef FEATURE_TWO
  do_feature_two();
#endif
#ifdef FEATURE_THREE
  do_feature_three();
#endif
}
fputs("Internal error\n", stderr);
```

If you compile this code for a product that contains none of the three optional features, the compiler might terminate the loop and cause the error code to run. (This code is from Keith Thompson.)

Finally, if I accidentally write an infinite loop, I’d prefer my program to hang so I can use a debugger to find the problem.  If the compiler deletes the loop and also computes a nonsensical result, as in the Fermat example, I have no easy way to find the latent error in my system.

# Are Termination-Preserving Compilers Uneconomical?

The C and C++ languages have undefined behavior when a signed integer overflows.  Java mandates two’s complement behavior.  Java’s stronger semantics have a real cost for certain kinds of tight loops such as those found in digital signal processing, where undefined integer overflow can buy (I have heard) 50% speedup on some real codes.

Similarly, Java’s termination semantics are stronger than C++0x’s and perhaps stronger than C’s.  The stronger semantics have a cost: the optimizer is no longer free to, for example, move side effecting operations before or after a potentially non-terminating loop.  So Java will either generate slower code, or else the C/C++ optimizer must become more sophisticated in order to generate the same code that Java does.  Does this really matter?  Is it a major handicap for compiler vendors?  I don’t know, but I doubt that the effect would be measurable for most real codes.
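To make the tradeoff concrete, here is a sketch of the kind of transformation at stake (my example):

```
/* Sketch: if the compiler may assume the loop terminates (C++0x), it
   can delete the loop and collapse the two stores into "done = 2".
   If it must preserve non-termination (Java, and arguably C), it
   cannot: when the loop hangs, the abstract machine never performs
   the second store. */
unsigned done;

void f (unsigned n) {
  done = 1;
  while (n != 1) {                      /* Collatz step: termination unproven */
    n = (n & 1) ? 3 * n + 1 : n / 2;
  }
  done = 2;
}
```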

# Worse is Better

Richard Gabriel’s classic Worse is Better essay gives the example where UNIX has worse semantics than Multics: it permits system calls to fail, forcing users to put them in retry loops.  By pushing complexity onto the user (which is worse), UNIX gains implementation simplicity, and perhaps thereby wins in the marketplace (which is better).  Pushing nonintuitive termination behavior onto the user, as C++0x does, is a pretty classic example of worse is better.

# Hall of Shame

These C compilers are known to not preserve termination properties of code: Sun CC 5.10, Intel CC 11.1, LLVM 2.7, Open64 4.2.3, and Microsoft Visual C 2008 and 2010.  The LLVM developers consider this behavior a bug and have since fixed it. As far as I know, the other compiler vendors have no plans to change the behavior.

These C compilers, as far as I know, do not change the termination behavior of their inputs: GCC 3.x, GCC 4.x, and the WindRiver Diab compiler.

# Acknowledgments

My understanding of these issues benefited from conversations with Hans Boehm and Alastair Reid.  This post does not represent their views and all mistakes are mine.

## Book Review: Street-Fighting Mathematics

The Trinity test occurred on a calm morning.  Enrico Fermi, one of the observers, began dropping bits of paper about 40 seconds after the explosion; pieces in the air when the blast wave arrived were deflected by about 2.5 meters.  From this crude measurement, Fermi estimated the bomb’s yield to be ten kilotons; he was accurate within a factor of two.  Although Street-Fighting Mathematics does not address the problem of estimating bomb yields, it gives us a reasonably generic toolbox for generating quantitative estimates from a few facts, a lot of intuition, and impressively little calculus.  As one of the reviews on Amazon says, this book makes us smarter.

Street-Fighting Mathematics — the title refers to the fact that in a street fight, it’s better to have a quick and dirty answer than to stand there thinking about the right thing to do — is based on the premise that we can and should use rapid estimation techniques to get rough answers to difficult problems.  There are good reasons for preferring estimation over rigorous methods: the answer is arrived at quickly, the full set of input data may not be needed, and messy calculus-based or numerical techniques can often be avoided.  Perhaps more important, by avoiding a descent into difficult symbol pushing, a greater understanding of the problem’s essentials can sometimes be gained and a valuable independent check on rigorous — and often more error prone — methods is obtained.

Chapter 1 is about dimensional analysis: the idea that by attaching dimension units (kg, m/s², etc.) to quantities in calculations about the physical world, we gain some error checking and also some insight into the solution.  Dimensional analysis is simple and highly effective and it should be second nature for all of us.  Too often it isn’t; my guess is that it gets a poor treatment in secondary and higher education.  Perhaps it is relevant that about ten years ago I went looking for books about dimensional analysis and found only one, which had been published in 1964 (Dimensional Analysis and Scale Factors by R.C. Pankhurst).  If Mahajan had simply gone over basic dimensional analysis techniques, it would have been a useful refresher.  However, he ups the ante and shows how to use it to guess solutions to differential and integral equations: a genuinely surprising technique that I hope to use in the future.

Chapter 2 is about easy cases: the technique of using degenerate cases of difficult problems to rapidly come up with answers that can be used as sanity checks and also as starting points for guessing the more general solution.  Like dimensional analysis, this is an absolutely fundamental technique that we should all use.  A fun example of easy cases is found not in Street-Fighting Mathematics, but in one of Martin Gardner’s books: compute the remaining volume of a sphere which has had a cylindrical hole 6 inches long drilled through its center. The hard case deals with spheres of different radii. In contrast, if we guess that the problem has a unique solution, we’re free to choose the easy case where the diameter of the cylinder is zero, trivially giving the volume as 36π cubic inches. Many applications of easy cases are simple enough, but again Mahajan takes it further, this time showing us how to use it to solve a difficult fluid flow problem.
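For the record, the easy-case arithmetic goes like this (my own check, not taken from either book): shrinking the cylinder’s diameter to zero means the 6-inch hole spans a full diameter, leaving an intact sphere of radius 3 inches:

```
V = \frac{4}{3}\pi r^{3} = \frac{4}{3}\pi \cdot 3^{3} = 36\pi \;\text{cubic inches}
```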

Chapter 3 is about lumping: replacing a continuous, possibly infinite function with a few chunks of finite size and simple shape.  This is another great technique.  The chapter starts off easily enough, but it ends up being the most technically demanding part of the book; I felt seriously out of my depth (it would probably help if I had used a differential or integral equation in anger more recently than 1995).

Chapter 4 is about pictorial proofs: using visual representations to create compelling mathematical explanations where the bare symbols are non-intuitive or confusing.  This chapter is perhaps the oddball: pictorial proofs are entertaining and elegant, but they seldom give us the upper hand in a street fight.  I love the example where it becomes basically trivial to derive the formula for the area of a circle when the circle is cut into many pie-pieces and its circumference is unraveled along a line.

Chapter 5 is “taking out the big part”: the art of breaking a difficult problem into a first-order analysis and one or more corrective terms.  The idea is that analyzing the big part gives us an understanding of the important terms, and also that in many cases we may end up computing few or none of the corrections since the first- or second-order answer may turn out to be good enough.  Mahajan introduces the idea of low-entropy equations: an appealing way of explaining why we want and need simplicity in street-fighting mathematics.

Finally, Chapter 6 is about reasoning by analogy: attacking a difficult problem by solving a related, simpler one and then attempting to generalize the result.  The example of how many parts an n-dimensional space is divided into by introducing some n-1 dimensional constructs is excellent, and ends up being quite a bit more intricate than I’d have guessed.  This chapter isn’t the most difficult one, but it is probably the deepest: analogies between different areas of mathematics can border on being spooky.  One gets the feeling that the universe is speaking to us but, like children at a cocktail party, we’re not quite putting all of the pieces together.

Back of the envelope estimation is one of the main elements of a scientist or engineer’s mental toolkit and I’ve long believed that any useful engineer should be able to do it, at least in a basic way.  Others seem to agree, and in fact quantitative estimation is a lively sub-genre of the Microsoft / Google / Wall Street interview question.  Speaking frankly, as an educator of engineers in the US, I have to say that our system fails somewhat miserably in teaching students the basics of street-fighting mathematics.  The problem (or rather, part of it) is that mathematics education focuses on rigorous proofs and derivations, while engineering education relies heavily on pre-derived cookie-cutter methods that produce little understanding.  In contrast, estimation-based methods require strong physical intuition and good judgment in order to discard irrelevant aspects of a problem while preserving its essence.  The “rapidly discard irrelevant information” part no doubt explains the prevalence of these questions in job interviews: does anyone want employees who consistently miss the big picture in order to focus on stupid stuff?

In summary this is a great book that should be required reading for scientists and engineers.  Note that there’s no excuse for not reading it: the book is under a Creative Commons license and the entire contents can be downloaded free. Also, the paper version is fairly inexpensive (currently $18 from Amazon).  The modes of thinking in Street-Fighting Mathematics are valuable and so are the specific tricks.  Be aware that it pulls no punches: to get maximum benefit, one would want to come to the table with roughly the equivalent of a college degree in some technical field, including a good working knowledge of multivariate calculus.

## Book Review: Pale Fire

Pale Fire is a 999-line poem written by John Shade.  It is also a novel by Vladimir Nabokov that contains an introduction by Charles Kinbote, the poem, and Kinbote’s extended commentary on the poem.

On the surface, Pale Fire is straightforward.  The poem is a touching — but not, it would seem, terribly good — autobiography and meditation on mortality.  Kinbote turns out to be the deposed king of Zembla, a small country in northern Europe, and is hilariously self-important, generally telling his own story rather than commenting on the poem.  When he briefly returns to the poem, it is usually to misread it entirely.  Nabokov clearly had great fun writing this, and he deadpans it all the way through.

Looking a little deeper, the story is more tangled.  Kinbote is detached from reality and we begin to suspect that he is not really a deposed king, that Zembla may not exist (within the novel), and that Kinbote may not, in fact, even exist.  One thing I think we can take for granted is that Shade is dead at the end of the novel; what author would waste “Shade” on a living man?  It is also pretty obvious that Kinbote is insane and/or a fabrication. Towards the end of the novel the Kinbote voice begins to slip and very late the bizarre Gradus/Gray distinction is made.

In the end it seems clear that Kinbote has murdered Shade and stolen the Pale Fire manuscript, that Gradus/Gray is a total fabrication, and that Kinbote is writing his commentary while hiding from the law.  I’ve heard other explanations, for example that Gray is the murderer; this doesn’t feel right.  Also Kinbote has told us that playing word golf he can turn “live to dead in five steps,” surely not a coincidence.

Pale Fire is a wonderful puzzle. It is strongly reminiscent of Gene Wolfe’s work, Peace in particular.  My guess would be that some of the odder words used in Wolfe’s novels (nenuphar, psychopomp) indicate that he was influenced by Pale Fire.

## Book Review: Surviving Your Stupid Stupid Decision to go to Grad School

Good jobs have barriers to entry.  Sometimes these barriers are natural (not everyone is capable of writing a novel or being a leader) and sometimes they are artificial (not everyone is born in the right place or to the right parents).  Many well-paid jobs requiring very specialized skills are protected by — among other mechanisms — the requirement that applicants have an advanced degree.  For some people, the substantial costs of acquiring one of these degrees are worth it, for others they are not.  Because graduate students’ experience varies greatly across disciplines, across departments, and across individuals within a specific department, it’s hard to make useful generalizations about it.

Surviving Your Stupid Stupid Decision to Go to Grad School, nevertheless, does a good job capturing and satirizing important universal aspects of the graduate school experience: the insecurity, the low or nonexistent pay, the whims of advisers and committees, the potential for wasting what could be productive and happy years, etc.  Because the book is targeted at people already in grad school, its purpose is not to inform, but rather to entertain and also to reassure people that some of the more absurd parts of grad student life are, in fact, widely experienced.  No serious advice is given until the epilogue, which manages to say what is perhaps the single most important thing that a grad student can hear: don’t just sit there, take control.

Surviving is very funny in places, but for my taste it tries too hard.  The hit rate for jokes is not that high; one hopes that Ruben — who, according to the back cover, has a stand-up comedy act — doesn’t quit his day job. I read it on a plane flight; not just any plane flight, but one with my three- and five-year-old boys, and without my wife.  As anyone who flies with kids can tell you, it’s not a good time for heavy reading.  An undistracted reader could polish the book off in an hour.

One day soon, I am going to leave this book in the lab where my grad students work.  Then I am going to walk out, chuckling in an evil way.

## How to Get a Grant (from NSF, for Computer Scientists)

Learning to write good grant proposals is hard. First, while we learn to write papers in grad school, most of us only learn to write grants after completing a PhD, and possibly only after getting a faculty position. Second, the hit rates for grants are typically lower than for papers. Third, it’s fundamentally harder to write compelling text about a problem that you have not yet solved. For similar reasons, the PhD proposal is a major stumbling block for many folks. Fourth, grant review panels are smaller and have much higher variance in expertise than do top-tier conference program committees. Finally, there is less visibility into the proposal evaluation process than there is into the paper evaluation process.

The upshot of all of these difficulties is that writing grant proposals is a bit of a frightening activity. One can spend a lot of time writing proposals before getting a hit. This is frustrating and, if it goes on very long, career-limiting. (Tenure committees usually evaluate candidates on research, teaching, and service — but all too often money is the ten-ton elephant hiding in the corner.)

This writeup is a collection of random thoughts; it’s in no way complete. You have to read the call for proposals and parts of the GPG carefully. You should talk to the program managers. You need to figure out how to work with your sponsored projects office and with your department’s internal grant-writing support people. Others have put useful information on the net; I found Cindy Grimm’s advice to be particularly good.

# Writing a Proposal

To be funded, a proposal has to receive good reviews from respected researchers in your field. To get good reviews, there are three main things you need to show: a vision, a plan, and a capability. Let’s look at these in more detail, but first note that these don’t really match up with the parts of an NSF proposal as described in the Grant Proposal Guide. Rather, these are things that need to come out of the proposal as a whole.

## Vision

The median annual income for a household in the United States is around $50,000. When you write a grant proposal, you’re typically asking the government to give you 2 to 100 times this amount. Think about it: every dollar you request will have to come from some hapless taxpayer (or corporation, or increase the debt…) and additionally won’t be used to cure cancer or put a person on Mars (unless, of course, your work directly attacks one of these problems). My opinion is that when you ask for this level of funding, the request had better be backed up by a damn compelling vision for how the world will be different if the money is awarded.

The vision part of a proposal is tricky. It has to be concise — probably around a page — and compelling, while remaining strongly connected to the proposed work. You can’t spin a story about ending disease and poverty, then immediately segue into open problems of the 5-bit frobnicator. The best vision parts of proposals leave panelists feeling that they have a mandate: the problem is so important that the work has to be funded no matter what. I’ve read proposals like this, they’re rare and wonderful. They always get funded.

In summary, the purpose of this part of a proposal is to convince your review panel that the problem that you are attacking matters, that it needs to be solved. It must not make people bored or irritated.

## Plan

Here’s the really hard part: you have to convince the proposal review panel that you have an approach: a plan of attack for your chosen research problem. You must share your ideas, insights, and early results in order to paint a picture of a plausible research program that will come to fruition in 3-5 years. This is not the place to hold back on your most ingenious idea for fear of having it stolen. In general, it’s hard to find a piece of work that is plausibly doable, but that doesn’t sound like more of the same. Panels, in general, are not very interested in funding more of the same, they want new approaches.

One interesting question is: How much work should you do before writing a proposal? I’ve heard people say that you should basically have finished a piece of work before writing the proposal about it, and then you use that grant to fund the next piece of work. This seems pretty dishonest and in any case this approach would be highly risky if you’ve published the work. On the other hand, the reality is that it’s pretty tough to write a convincing proposal without having some kind of preliminary results. So I guess in the end the answer is pretty simple: you strike a balance that feels right.

## Capability

A good proposal makes it clear that you are the best person in the world to do the work that you are proposing. You absolutely do not want a panelist thinking “Gee… that’s a neat idea, but who’s this schmuck? I bet Jones at MIT could knock off that project in six months.”

The capability to do research can be demonstrated in different ways. First, even if you’ve never received NSF support, you should briefly describe your previous accomplishments in the area of interest. Second, you might want to find an excellent (and probably more senior) collaborator who can be a co-PI on the proposal. Third, you can demonstrate capability by association, whether it is with equipment such as the huge cluster of Cray-27 nodes available only at your institution, or with people, such as your unique industrial contacts.

It can be daunting to realize that your competition may have been working in the field for 25 years, may run an operation with a budget 20 times larger than yours, and may have won every award there is to get. In fact, these people often write incredibly polished proposals. However, it is highly likely that established people are working on established ideas. It is only the very best people in the field who keep doing extremely original work year after year, and you might as well just hope you don’t come up against them too often. In contrast, as a new person in the field you should have new ideas and approaches to bring to the table. Play this up in the proposal: it’s not just business as usual.

## Other Issues

It’s important to make life easy for the people who will be evaluating your proposal. Assume yours is buried in the middle of a pile of about 20 proposals, and it will be read (probably on a plane to DC) by an exhausted person who would rather be doing something different. To make things easy for reviewers, keep it simple. Don’t assume everyone is interested in the deepest mathematics that you are capable of writing about. Use plenty of figures and graphs to break up the flow of text. Since some panelists like to skip around, try to make parts of the proposal as self-contained as possible and use informative section titles. Don’t refer to the previous work as nascent or vacuous, since that work’s authors are probably on the panel.

Many of your colleagues have served on NSF panels. Get them to read your proposal and give feedback. Unfortunately, this will require that the proposal not be coming together on the day of the deadline.

# Getting a Grant

Ok, so far this post has actually been about how to write a proposal, now let’s talk about how to get a grant. Realistically, unless you are both very good and very lucky, you need to submit more proposals than you are really interested in writing. One of my colleagues (not jokingly, I suspect) talks about submitting them faster than they can be rejected. Writing lots of proposals is sort of a bummer since each one keeps you from doing something else important, like writing a paper. Also, you never, ever want to write a proposal for work that you don’t actually want to do, since of course that proposal would end up being the only one funded.

When a proposal is rejected there are several things to keep in mind. First, it may not have been reviewed by any experts in your exact field. Second, if you revise and resubmit the proposal, it will be evaluated by an entirely new panel. A friend of mine once had a proposal rejected, but with strong reviews — it was barely in the not-funded category. He diligently revised the proposal and the next (much better) version got only lukewarm reviews. Again, he revised and resubmitted and the third time around got quite negative reviews. This kind of experience is not terribly uncommon and for this reason a different friend of mine makes a point of never reading his NSF reviews at all (or so he claims).

I enjoy going out drinking with my colleagues, although it only seems to happen a few times a year. It should come as a surprise to nobody that professors are natural bullshitters and people always have good stories: nearly destroying a ticket booth at Alta while doing avalanche control work, barely sub-nuclear pyrotechnic displays out in the desert, skiing across Siberia during the good old days of the USSR, things like that. It is perhaps relevant that one of my colleagues, Erik Brunvand, is the son of the guy who popularized the term “urban legend.” We also tell work stories; these are slightly less colorful but this is one of my favorites:

At some point before my time at Utah, some professors got together and decided to bring a few principles to the chaotic process that is graduate admissions. The experiment worked like this.  First, the applications to graduate school from a number of previous years were gathered up and a spreadsheet was created containing every current graduate student in addition to all of the quantitative data from their applications.  This spreadsheet was sorted by various fields such as GRE score, undergraduate GPA, etc.  Then, the resulting lists of names were passed out to the faculty without telling them what the sorting keys were. Each professor was asked to choose which list or lists best sorted the graduate students by ability. Finally, the results were tallied up.  The most important predictor of grad student quality turned out to be whether the student arrived with a MS degree or not, followed closely by social security number and then phone number.  GRE and GPA were not predictors at all.

## Can I Stop Lecturing Now?

In Fall 2010 I’ll teach my Advanced Embedded Systems course for the fifth time.  This is a class that I enjoy teaching and that students generally enjoy taking.  I created it from scratch and am fairly proud of it.  There’s just one problem: I’m ready to stop giving the lectures.  First, although I tweak and improve them each year, I’m getting somewhat bored with them.  Second — and much more important — I’ve become convinced that (in this case, not necessarily in general) lectures aren’t the best use of class time.  For one thing, even when students are actively engaged with the material, it’s hard for them to sit still and listen to someone for 75 minutes.  For another, the thing I really want students to get from the class is a taste of real-world embedded systems design and implementation.  They should be learning skills such as working in a team, understanding requirements, evaluating software architectures and other implementation choices, creating maintainable code, debugging in challenging environments, reviewing code, and figuring out how to test complex combinations of hardware and software.

Here’s my idea.  First, while giving the lectures this fall, I’ll arrange for good-quality videos to be recorded, so I can put them online.  Second, after this fall, I’ll give few or no lectures but rather ask the students to watch at least some of them online, outside of class.  Then we’ll spend the class periods — typically two 75-minute sessions per week — working out the design and implementation of a more open-ended and aggressive collection of projects than I had previously given (note that there are 3 hours of lab time per week in addition to the lecture time).  Also, after teams get their projects to work, I’ll require them to integrate these projects with each other, for example using a serial line or CAN bus.  Previously, when I have required interaction between different teams’ projects, this has turned out to be far more challenging than the students anticipated, even for relatively simple projects such as distributed time synchronization or a consensus protocol.  Also, I’d like to spend a fair amount of time in class doing code reviews.  It turns out that for a variety of reasons, such as the fact that they always get to throw away code after writing it, a lot of junior/senior-level CS and CE students are not exactly the kind of coders I’d hire for an ambitious embedded project.  Regular code reviews can help turn coders into real programmers. One final idea that I keep wanting to try, but haven’t yet, is to force teams to exchange code bases with each other partway through the semester.  This is basically evil and will probably cause me to take a hit on the teaching evaluations, but I think it is guaranteed to be a character-building experience.

Anyway, it’s possible that this experiment will make my course more awesome, or it may tank miserably.  I’d love to hear feedback and advice from readers, either in the comments here or in email. In a couple years I’ll let you all know how it went.

## Book Review: Sustainable Energy — Without the Hot Air

The premises are simple.  First, energy consumption must be met by energy production.  Second, use of fossil fuels is unsustainable.  Third, no magical technological fix to the energy problem is going to arrive.  Finally, to understand a sustainable future, we must think quantitatively.  That is, to proceed with a debate about, for example, wind or solar without a firm understanding of the magnitudes involved is foolishness.  This last premise is where Sustainable Energy departs significantly from the vast majority of information available in the popular press, which trumpets dubious oversimplifications like “hydrogen is good” or “unplug your phone charger when not being used” without providing the larger context.

The central question this book deals with is: If we must stop using fossil fuels, can we keep expending the amount of energy that we currently do, or will our standard of living necessarily contract?  To attack this question, the first part of the book alternately describes the components of energy demand and potential alternative sources such as tides, wind, and solar.  Impressively, MacKay manages to create a certain amount of tension between the supply and demand sides of the equation, making this initial part of the book more entertaining than it perhaps sounds.

The second part investigates sustainable energy in more detail.  How much can we save by mass transit?  Electric cars?  Better insulation?  On the production side, what can we do with nuclear power?  With solar energy from Africa’s vast deserts?  The capstone of Sustainable Energy is a collection of five energy plans for Britain, each making different tradeoffs between nuclear or not, domestic power vs. imported, etc.  The message is clear: we can live without fossil fuels, but only at considerable expense and inconvenience.

MacKay’s presentation of the material is clever.  He writes in an engaging way, including plenty of personal opinions and anecdotes.  The graphs and figures are first-rate.  Person-sized units of power such as kWh per day help us understand energy at a level where we can each make a difference, as opposed to the more remote view promoted by nation-sized units like GWh per day.  Facts are backed up by simple, intuitive physical explanations that will appeal strongly to scientific-minded readers.  Finally, MacKay includes a lot of specific recommendations for ways that individuals can make a difference in their own energy consumption, often supported by data and stories from his own life.

One could easily quibble with some of the specific quantities used in the book, such as the average wind speed in Scotland or the amount of unmined uranium in Africa.  However, to do this is to miss the point.  MacKay isn’t so much providing a collection of facts as a framework for thinking about energy supply and demand.  It is a certainty that over the next few years estimates of energy reserves will be refined and technologies like solar panels and batteries will improve a bit faster or slower than expected.  This new information may alter the nuances of MacKay’s message, but the fundamentals are so large that no invention short of practical fusion will render the book obsolete.  Similarly, an American reader could become irritated with the UK-centric nature of the writing, but again this misses the point: from the American point of view the major difference is that in the UK they use far less energy per person than we do.  Also, towards the end of the book MacKay specifically addresses the question of adapting his energy plans for the USA.

In summary, entering the energy debate without reading this book is like debating Christian theology without having read the Bible.  MacKay has created a work that is enjoyable to read and also provides a framework for understanding and discussing one of the most important problems faced by the developed world in the next 50 years.  An electronic copy of the entire book can be downloaded for free.

## C Compilers Disprove Fermat’s Last Theorem

[Update: I wrote another post on this topic that may explain the underlying issues more clearly.]

Obviously I’m not serious: compilers are bad at solving high-level math problems and also there is good reason to believe this theorem cannot be disproved. But I’m getting ahead of myself. Recently — for reasons that do not matter here — I wanted to write C code for an infinite loop, but where the compiler was not to understand that the loop was infinite.  In contrast, if I had merely written

`  while (1) { }`

or

`  for (;;) { }`

most optimizing compilers would see the loop’s inability to exit and generate code accordingly. For example, given this C code:

```
void foo (void) {
  for (;;) { }
  open_pod_bay_doors();
}
```

Most compilers will emit something like this:

```
foo:
L2: jmp L2
```

In this case the compiler emits neither the call to open_pod_bay_doors() nor the final return instruction because both are provably not executed.

Perhaps interestingly, LLVM/Clang recognizes that this slightly obfuscated infinite loop never exits:

```
unsigned int i = 0;
do {
  i += 2;
} while (0 == (i & 1));
```

Faced with a loop optimizer that has some brains, I decided to stop messing around and wrote a loop that should thwart any compiler’s termination analysis:

```
const int MAX = 1000;
int a=1, b=1, c=1;
while ((c*c*c) != ((a*a*a) + (b*b*b))) {
  a++;
  if (a > MAX) {
    a = 1;
    b++;
  }
  if (b > MAX) {
    b = 1;
    c++;
  }
  if (c > MAX) {
    c = 1;
  }
}
```

This loop only terminates if it finds a counterexample to a special case of Fermat’s Last Theorem.  Fermat’s Last Theorem, of course, states that no solution exists for the equation aⁿ + bⁿ = cⁿ for positive integers a, b, and c and for integer n > 2. Here n = 3 and a, b, c are in the range [1..1000]. On a platform with 32-bit integers 1000 is a reasonable maximum because 2·1000³ is not much less than 2³¹.

It turns out that when optimizations are enabled, several compilers (LLVM/Clang 2.7, Open64-x86 4.2.3, Sun CC 5.10, and Intel CC 11.1.072) go ahead and permit this loop to terminate.  Specifically, when the loop is enclosed in a function, the compilers emit x86 assembly which looks something like this:

```
fermat:
        ret
```

The implication, of course, is that the compiler has disproved Fermat’s Last Theorem.  Faced with this incredible mathematical discovery, I held my breath and added a line of code at the end of the function to print the counterexample: the values of a, b, and c.  Unfortunately, with their bluffs called in this fashion, all of the compilers emitted code that actually performed the requested computation, which of course does not terminate.  I got the feeling that these tools — like Fermat himself — had not enough room in the margin to explain their reasoning.
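Reconstructed, the bluff-calling version looks something like this (a sketch; the point is that a, b, and c are now used after the loop):

```
#include <stdio.h>

/* Sketch of the bluff-calling variant: printing the "counterexample"
   makes the loop's results live, so the compilers emit code that
   actually searches -- and therefore never returns. */
int fermat (void) {
  const int MAX = 1000;
  int a=1, b=1, c=1;
  while ((c*c*c) != ((a*a*a) + (b*b*b))) {
    a++;
    if (a > MAX) { a = 1; b++; }
    if (b > MAX) { b = 1; c++; }
    if (c > MAX) { c = 1; }
  }
  printf ("counterexample: a=%d, b=%d, c=%d\n", a, b, c);
  return 1;
}
```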

What is really going on here is that compiler optimizations and termination analysis have long been at odds.  In other words, if compilers were obligated to preserve the termination properties of the code they translate, they would be unable to perform (or would have more difficulty performing) some of the optimizations that they use to create efficient code.  A choice is being made by compiler developers — probably consciously, though it’s hard to be sure — to prefer speed over correctness. The news, however, is not all bad: Microsoft’s C compiler, the Wind River Diab C compiler, and several versions of GCC all did the right thing, changing the termination properties of none of the examples I tried.

Update from Sat 5/1: It turns out the LLVM folks have been working on this problem lately and their latest SVN now does not contain this bug.  Very nice!

Update from Sat 5/1: Someone on Reddit noticed (and one of my students confirmed) that the Microsoft compilers do have termination bugs.  The compilers in both Visual Studio 2008 and 2010 generate code for the Fermat function, but then calls to this function are dropped because it is believed to be free of side effects (this was exactly what LLVM did before they fixed the problem).

# Update from Friday 4/30

I’ll try to clarify a few of the questions that have come up on Reddit and in the comments here.  Also I fixed a mistake in the statement of Fermat’s Last Theorem that someone on Reddit pointed out.  Thanks!

Q: Does this actually matter at all?

A: Yes, but in very specialized situations that usually only come up when developing embedded software.  One example is described here: the poster wants the program being simulated to hang when main() exits, but LLVM deletes the loop that was intended to hang up the processor.  The workaround was to compile the code with optimizations turned off.  Another example happens when an embedded system has updated its firmware and wants to do nothing until the watchdog timer reboots the processor into the new version.  It’s no coincidence that gcc and the Wind River C compiler — both of which are heavily used in the embedded world — get termination right.

Q: Since infinite loops are bad style, isn’t it OK for the compiler to terminate them?  Shouldn’t people be putting the CPU to sleep, blocking the running thread, or whatever?

A: First, not all programs have an operating system or even a threading system to call out to. Embedded software commonly runs on the bare metal.  Second, the meaning of a program is defined by the language standard and style has nothing to do with it.  See my earlier post The Compiler Doesn’t Care About Your Intent.

Q: Does the C standard permit/forbid the compiler to terminate infinite loops?

A: The compiler is given considerable freedom in how it implements the C program, but its output must have the same externally visible behavior that the program would have when interpreted by the “C abstract machine” that is described in the standard.  Many knowledgeable people (including me) read this as saying that the termination behavior of a program must not be changed.  Obviously some compiler writers disagree, or else don’t believe that it matters.  The fact that reasonable people disagree on the interpretation would seem to indicate that the C standard is flawed.  In contrast, the Java language definition is quite clear that infinite loops may not be terminated by the JVM.

Q: Are you saying the compiler should do termination analysis?  That’s impossible by trivial reduction to the halting problem.

A: Termination analysis does not need to be part of the compiler at all. However, I (and others) would claim that the compiler should perform a termination analysis of any useless loop before deleting it.  Although the general problem is not computable, many specific instances can be easily solved.

Q: Does the Fermat code in this post execute any signed integer overflows or other undefined behaviors?

A: I don’t believe so.

# Update from Saturday 5/1

Q: Didn’t you know Fermat’s Last Theorem was proved in 1995?

A: I did know that.  Since I got my math degree in 1995, it would have been very difficult for me to miss this event :). I was making a weak joke and also referring to the fact that proofs, especially complicated ones, can contain errors. In fact, as someone noted in the comments, Wiles’ initial proof was wrong. Also note that the n=3 special case was proved much earlier, in 1770.

Q: What’s the best workaround if you really want an infinite loop in C?

A: As several people have pointed out, looping on a volatile-qualified variable is probably the best choice. But keep in mind that compilers don’t always respect volatile….
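For example, a minimal sketch:

```
/* The compiler must perform every read of a volatile object, so it
   cannot prove this loop free of observable behavior, and it must
   not delete it. */
static volatile int forever = 1;

void hang (void) {
  while (forever) { }
}
```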

# One more update from Saturday 5/1

Here’s a fun complete program that is more compelling than the code above because it explicitly uses a return value from the “theorem disproved” branch of the code:

```
int fermat (void)
{
  const int MAX = 1000;
  int a=1, b=1, c=1;
  while (1) {
    if ((a*a*a) == ((b*b*b) + (c*c*c))) return 1;
    a++;
    if (a > MAX) {
      a = 1;
      b++;
    }
    if (b > MAX) {
      b = 1;
      c++;
    }
    if (c > MAX) {
      c = 1;
    }
  }
  return 0;
}

#include <stdio.h>

int main (void)
{
  if (fermat ()) {
    printf ("Fermat's Last Theorem has been disproved.\n");
  } else {
    printf ("Fermat's Last Theorem has not been disproved.\n");
  }
  return 0;
}
```

Here’s what the Intel and Sun compilers have to say:

```
regehr@john-home:~$ icc fermat2.c -o fermat2
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.
regehr@john-home:~$ suncc -O fermat2.c -o fermat2
"fermat2.c", line 20: warning: statement not reached
regehr@john-home:~$ ./fermat2
Fermat's Last Theorem has been disproved.
```

Open64-x86 and LLVM/Clang 2.7 have the same behavior.  Although plenty of folks in the peanut gallery disagree, it seems perfectly clear to me that this is a serious error in these compilers.  I mean, why return 1?  Is that worse or better than returning 0?  Neither result makes any sense.