How Getting Tenure Is Supposed to Work

The other day Geoff Challen posted a blog entry about his negative tenure vote. Having spent roughly equal time on the getting-tenure and having-tenure sides of the table, I wanted to comment on the process a little. Before going any further I want to clarify that:

  • I know Geoff, but not well
  • I wasn’t involved in his case in any capacity, for example by writing a letter of support
  • I have no knowledge of his tenure case beyond what was written up in the post

Speaking very roughly, we can divide tenure cases into four quadrants. First, the professor is doing well and the tenure case is successful — obviously this is what everybody wants, and in general both sides work hard to make it happen. Second, the professor is not doing well (not publishing at all, for example) and the tenure case is unsuccessful. While this is hugely undesirable, at least the system is working as designed. Third, the professor is not doing well and the tenure case is successful — this happens, but very rarely and usually in bizarre circumstances, for example where the university administration overrules a department’s decision. Finally, we can have a candidate who is doing well and then is denied tenure. This represents a serious failure of the system. Is this what happened to Geoff? It’s hard to be sure but his academic record looks to me like a strong one for someone at his career stage. But keep in mind that it is (legally) impossible for the people directly involved in Geoff’s case to comment on it, so we are never going to hear the other side of this particular story.

So now let’s talk about how tenure is supposed to work. There are a few basic principles (I suspect they apply perfectly well to performance evaluations in industry too). First, the expectations must be made clear. Generally, every institution has a written document stating the requirements for tenure, and if a department deviates from them, decisions they make can probably be successfully appealed. Here are the rules at my university. Junior faculty need to look up the equivalent rules at their institution and read them, but of course the university-level regulations miss out on department-specific details such as what exactly constitutes good progress. It is the senior faculty’s job to make this clear to junior faculty via mentoring and via informal faculty evaluations that lead up to the formal ones.

If you look at the rules for tenure at Utah, you can see that we’re not allowed to deny tenure just because we think someone is a jerk. On the other hand, there is perhaps some wiggle room implied in this wording: “In carrying out their duties in teaching, research/other creative activity and service, faculty members are expected to demonstrate the ability and willingness to perform as responsible members of the faculty.” I’m not sure what else to say about this aspect of the process: tenure isn’t a club for people we like, but on the other hand the faculty has to operate together as an effective team over an extended period of time.

The second principle is that the tenure decision should not be a surprise. There has to be ongoing feedback and dialog between the senior faculty and the untenured faculty. At my institution, for example, we review every tenure-track professor every year, and each such evaluation results in a written report. These reports discuss the candidate’s academic record and provide frank evaluations of strengths and weaknesses in the areas of research, teaching, and service (internal and external). Each year, the chair discusses the report with the tenure-track faculty member, and the candidate has the opportunity to correct factual errors in it. In the third and sixth years of a candidate’s faculty career, instead of producing an informal report (that stays within the department), we produce a formal report that goes up to the university administration, along with copies of all previous reports. The sixth-year formal evaluation is the one that includes our recommendation to tenure (or not to tenure) the candidate.

A useful thing about these annual evaluations is that they provide continuity: the reports don’t just go from saying glowing things about someone in the fifth year to throwing them under the bus in the sixth. If there are problems with a case, this is made clear to the candidate as early as possible, allowing the candidate, the candidate’s mentor(s), and the department chair to try to figure out what is going wrong and fix it. For example, a struggling candidate might be given a teaching break.

Another thing to keep in mind is that there is quite a bit of scrutiny and oversight in the tenure process. If a department does make a recommendation that looks bad, a different level of the university can overrule it. I’ve heard of cases where a department (not mine!) tried to tenure a research star who was a very poor teacher, but the dean shot down the case.

If you read the Hacker News comments, you would probably come to the conclusion that tenure decisions are made capriciously in dimly lit rooms by people smoking cigars. And it is true that, looking from the outside, the process has very little transparency. The point of this piece is that internally, there is (or should be) quite a bit of transparency and also a sane, well-regulated process with plenty of checks and balances. Mistakes and abuses happen, but they are the exception and not the rule.

Phil Guo, Sam Tobin-Hochstadt, and Suresh Venkatasubramanian gave me a bit of feedback on this piece but as always any blunders are mine. Sam pointed me to The Veil, a good piece about tenure.

Undefined Behavior: Not Just for Programming Languages

This is an oldie but goodie. Start with this premise:
a = b
Multiply both sides by a:
a² = ab
Subtract b² from both sides:
a² – b² = ab – b²
Factor the left side:
(a + b)(a – b) = ab – b²
Factor the right side:
(a + b)(a – b) = b(a – b)
Divide both sides by (a – b) and cancel:
a + b = b
Substitute b for a:
b + b = b
Finally, let b = 1 and simplify:
2 = 1

I ran into this derivation when I was nine or ten years old and it made me deeply uneasy. The explanation, that you’re not allowed to divide by (a – b) because this term is equal to zero, seemed to raise more questions than it answered. How are we supposed to keep track of which terms are equal to zero? What if something is equal to zero but we don’t know it yet? What other little traps are lying out there, waiting to invalidate a derivation? This was one of many times where I noticed that in school they seemed willing to teach the easy version, and that the real world was never so nice, even in a subject like math where — you would think — everything is clean and precise.

Anyway, the point is that undefined behavior has been confusing people for well over a thousand years — we shouldn’t feel too bad that we haven’t gotten it right in programming languages yet.

Principles for Undefined Behavior in Programming Language Design

I’ve had a post with this title on the back burner for years, but I was never quite convinced it would say anything I haven’t said before. Last night I watched Chandler Carruth’s talk about undefined behavior at CppCon 2016; it’s good material, and he says it better than I would have, so instead I’ll chat about his talk a bit.

First off, this is a compiler implementor’s point of view. Other smart people, such as Dan Bernstein, have a very different point of view (but also keep in mind that Dan doesn’t believe compiler optimization is useful).

Chandler is not a fan of the term nasal demons, which he says is misleadingly hyperbolic, since the compiler isn’t going to maliciously turn undefined behavior (UB) into code for erasing your files or whatever. This is true, but Chandler leaves out the fact that our 28-year-long computer security train wreck (the Morris Worm seems like as good a starting point as any) has been fueled to a large extent by undefined behavior in C and (later) C++ code. In other words, while the compiler won’t emit system calls for erasing your files, a memory-related UB in your program will permit a random person on the Internet to insert instructions into your process that issue system calls doing precisely that. From this slightly broader point of view, nasal demons are less of a caricature.

The first main idea in Chandler’s talk is that we should view UB at the PL level as being analogous to narrow contracts on APIs. Let’s look at this in more detail. An API with a wide contract is one where you can issue calls in any order, and you can pass any arguments to API calls, and expect predictable behavior. One simple way that an API can have a wider contract is by quietly initializing library state upon the first call into the library, as opposed to requiring an explicit call to an init() function. Some libraries do this, but many libraries don’t. For example, an OpenSSL man page says “SSL_library_init() must be called before any other action takes place.” This kind of wording indicates that a severe obligation is being placed on users of the OpenSSL API, and failing to respect it would generally be expected to result in unpredictable behavior. Chandler’s goal in this first part of the talk is to establish the analogy between UB and narrow API contracts and convince us that not all APIs want to be maximally wide. In other words, narrow APIs may be acceptable when their risks are offset by, for example, performance advantages.
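To make the contrast concrete, here is a minimal sketch of how lazy initialization widens a contract. This is a hypothetical toy library written in C++, not OpenSSL’s actual API:

#include <cstdio>
#include <mutex>

// Internal state of a hypothetical library.
static int table[256];

static void lib_init() {
  for (int i = 0; i < 256; i++)
    table[i] = i * i;  // populate internal state
}

// Narrow contract: the caller must call lib_init() first,
// or the result is meaningless.
static int lib_lookup_narrow(int i) { return table[i & 0xff]; }

// Wider contract: the library initializes itself, thread-safely,
// on first use, at the cost of a check on every call.
static int lib_lookup_wide(int i) {
  static std::once_flag once;
  std::call_once(once, lib_init);
  return table[i & 0xff];
}

int main() {
  std::printf("%d\n", lib_lookup_wide(7));    // fine without explicit init
  std::printf("%d\n", lib_lookup_narrow(7));  // fine only because init already ran
}

The wide version trades a small, mostly branch-predictable cost per call for the guarantee that no call order produces garbage; that trade-off is exactly what contract design is about.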

Coming back to programming languages (PL), we can look at something like the signed left shift operator as exposing an API. The signed left shift API in C and C++ is particularly narrow, and while many people have by now internalized that it can trigger UB based on the shift exponent (e.g., 1 << -1 is undefined), fewer developers have come to terms with the restrictions on the left-hand argument (e.g., 0 << 31 is defined but 1 << 31 is not). Can we design a wide API for signed left shift? Of course! We might specify, for example, that the result is zero when the shift exponent is too large or is negative, and that otherwise the result is the same as if the signed left-hand argument were interpreted as unsigned, shifted in the obvious way, and then reinterpreted as signed.
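Here is one way that wide contract might look in code. This is a sketch under my own naming, with the caveat in the comment about the final signed conversion:

#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Wide-contract signed left shift as specified above: out-of-range
// exponents yield zero; otherwise shift the bit pattern as if it were
// unsigned and reinterpret the result as signed. (The final
// unsigned-to-signed conversion is wrapping two's complement in C++20;
// in earlier standards it is implementation-defined, but never UB.)
int32_t shl_wide(int32_t lhs, int32_t exp) {
  if (exp < 0 || exp >= 32)
    return 0;  // a defined result instead of UB
  return static_cast<int32_t>(static_cast<uint32_t>(lhs) << exp);
}

int main() {
  std::printf("%" PRId32 "\n", shl_wide(1, 31));  // -2147483648, not UB
  std::printf("%" PRId32 "\n", shl_wide(1, -1));  // 0, not UB
}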

At this point in the talk, we should understand that “UB is bad” is an oversimplification, that there is a large design space relating to narrow vs. wide APIs for libraries and programming language features, and that finding the best point in this design space is not straightforward since it depends on performance requirements, on the target platform, on developers’ expectations, and more. C and C++, as low-level, performance-oriented languages, are famously narrow in their choice of contracts for core language features such as pointer and integer operations. The particular choices made by these languages have caused enormous problems and reevaluation is necessary and ongoing. The next part of Chandler’s talk provides a framework for deciding whether a particular narrow contract is a good idea or not.

Chandler provides these four principles for narrow language contracts:

  1. Checkable (probabilistically) at runtime
  2. Provide significant value: bug finding, simplification, and/or optimization
  3. Easily explained and taught to programmers
  4. Not widely violated by existing code that works correctly and as intended

The first criterion, runtime checkability, is crucial and unarguable: without it, we get latent errors of the kind that continue to contribute to insecurity and that have been subject to creeping exploitation by compiler optimizations. Checking tools such as ASan, UBSan, and tis-interpreter reduce the problem of finding these errors to the problem of software testing, which is very difficult, but which we need to deal with anyhow since there’s more to programming than eliminating undefined behaviors. Of course, any property that can be checked at runtime can also be checked without running the code. Sound static analysis avoids the need for test inputs but is otherwise much more difficult to usefully implement than runtime checking.
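For example, here is a tiny program (the file name is made up) whose latent UB compiles without complaint but is reported the moment it executes under UBSan:

// ub_demo.cpp: a latent error that a runtime checker makes visible.
// Build with, e.g.: clang++ -fsanitize=undefined ub_demo.cpp
#include <climits>
#include <cstdio>

int main() {
  int x = INT_MAX;
  int y = x + 1;  // signed overflow: UB; UBSan reports it at runtime
  std::printf("%d\n", y);
}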

Principle 2 tends to cause energetic discussions, with (typically) compiler developers strongly arguing that UB is crucial for high-quality code generation and compiler users equally strongly arguing for defined semantics. I find the bug-finding arguments to be the most interesting ones: do we prefer Java-style wrapping two’s complement integers, maximum performance as in C and C++, mandatory overflow traps as in Swift, or a hybrid model as in Rust (trap in debug builds, wrap in release builds)? Discussions of this principle tend to center around examples, which is mostly good, but is bad in that any particular example excludes a lot of other use cases, other compilers, and other targets that are also important.
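To make the design space tangible, here is a sketch of the three non-hybrid options expressed in C++; the helper names are mine, and __builtin_add_overflow is a GCC/Clang extension:

#include <cstdint>
#include <cstdlib>

// C/C++ style: overflow is UB, so the optimizer may assume it cannot happen.
int32_t add_ub(int32_t a, int32_t b) { return a + b; }

// Java style: defined two's complement wraparound. (The unsigned addition
// is always defined; converting the sum back to int32_t is wrapping in
// C++20 and implementation-defined before that.)
int32_t add_wrap(int32_t a, int32_t b) {
  return static_cast<int32_t>(static_cast<uint32_t>(a) +
                              static_cast<uint32_t>(b));
}

// Swift style: trap deterministically on overflow.
int32_t add_trap(int32_t a, int32_t b) {
  int32_t r;
  if (__builtin_add_overflow(a, b, &r))
    std::abort();  // trap instead of wrapping or UB
  return r;
}

int main() {
  return add_wrap(INT32_MAX, 1) == INT32_MIN ? 0 : 1;  // wraps as specified
}

Rust’s hybrid model then amounts to using something like add_trap in debug builds and add_wrap in release builds.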

Principle 3 is an important one that tends to get neglected in discussions of UB. The intersection of HCI and PL is not incredibly crowded with results, as far as I know, though many of us have some informal experience with this topic because we teach people to program. Chandler’s talk contains a section on explaining signed left shift that’s quite nice.

Finally, Principle 4 seems pretty obvious.

One small problem you might have noticed is that there are undefined behaviors that fail one or more of Chandler’s criteria but that many C and C++ compiler developers will defend to their dying breath. I’m talking about things like strict aliasing and the compiler’s freedom to assume that every loop terminates, which violate (at least) principles 1 and 3.
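Strict aliasing is a good example of the difficulty: the canonical pitfall below is UB, yet it is hard to detect at runtime (principle 1) and notoriously hard to teach (principle 3). The function names here are mine:

#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <cstring>

static_assert(sizeof(float) == sizeof(uint32_t), "assumes 32-bit float");

// UB: inspecting a float's bits through a uint32_t* violates strict aliasing.
uint32_t float_bits_bad(float f) {
  return *reinterpret_cast<uint32_t*>(&f);
}

// Well-defined alternative: copy the bytes. Compilers typically
// optimize the memcpy away, so there is no performance penalty.
uint32_t float_bits_ok(float f) {
  uint32_t u;
  std::memcpy(&u, &f, sizeof u);
  return u;
}

int main() {
  std::printf("0x%08" PRIx32 "\n", float_bits_ok(1.0f));  // 0x3f800000 (IEEE 754)
}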

In summary, the list of principles proposed by Chandler is excellent and, looking forward, it would be great to use it as a standard set of questions to ask about any narrow contract, preferably before deploying it. Even if we disagree about the details, framing the discussion is super helpful.

Vigorous Public Debates in Academic Computer Science

The other day a non-CS friend remarked to me that since computer science is a quantitative, technical discipline, most issues probably have an obvious objective truth. Of course this is not at all the case, and it is not uncommon to find major disagreements even when all parties are apparently reasonable and acting in good faith. Sometimes these disagreements spill over into the public space.

The purpose of this post is to list a collection of public debates in academic computer science where there is genuine and heartfelt disagreement among intelligent and accomplished researchers. I sometimes assign these as reading in class: they are a valuable resource for a couple of reasons. First, they show an important part of science that often gets swept under the rug. Second, they put discussions out into the open where they are widely accessible. In contrast, I’ve heard of papers that are known to be worthless by all of the experts in the area, but only privately — and this private knowledge is of no help to outsiders who might be led astray by the bad research. For whatever reasons (see this tweet by Brendan Dolan-Gavitt), the culture in CS does not seem to encourage retracting papers.

I’d like to fill any holes in this list; please leave a comment if you know of a debate that I’ve left out!

Here are some more debates pointed out by readers: