The Truth About the Life of the Mind

[This piece is a follow-up to The Big Lie About the Life of the Mind.]

Being a professor, like any other job, has its pros and cons. You’d hope that one of the advantages would be that the job encourages a person to live a life of the mind. Otherwise what’s the point, right?  I was thinking about this and realized I didn’t know quite what “life of the mind” means. There’s no accepted definition that I’m aware of. Should I be reading a book every day? Writing a book every six months? Spending my evenings debating philosophy in coffee houses? Meditating?

Eventually I decided that while I’ve never written a book and don’t debate philosophy much, I live a fairly mindful life:

  • I spend a good number of hours per week just thinking. Probably not 10 hours, but definitely at least five, mostly while hiking on easy trails near my house. (Coherent thought at home, where the three- and five-year-olds live, is challenging. I’ve long since given up trying to think at the office.)
  • I spend a good amount of time, again probably five hours in a typical week, reading things that I want to read. Here I’m talking about reading material that relates to being a researcher, but that I’m not forced to read in order to do my job. Generally I’m in the middle of one or two books and three or four papers.
  • My main job is doing research, and most parts of the research process (planning it, doing it, and writing about it) require hard thinking.
  • My other main job is teaching. Some teaching time, maybe 5-10%, counts as good thinking time: How to make this concept clear? What kind of example best suits this audience?

On the other hand, let’s be realistic: I spend a ton of time in meetings, managing people, dealing with department politics, writing boring parts of grant proposals and papers, writing reports for funding agencies, solving (and ducking) budget problems, helping students learn to present and write, reviewing papers and proposals that are of marginal interest to me, grading exams, giving lectures I’ve given many times before, and traveling. There is no doubt that in terms of hours, these kinds of activities dominate my work life. I don’t despise any of them, but it would be hard to say they are indicative of a life of the mind.

My view has been that in the long run, being a professor makes no sense unless the job supports, to a substantial extent, a life of the mind. Clearly one can succeed perfectly well without this life, for example by mechanically cranking out papers and proposals, being a good administrator, teaching by rote, etc. Certainly there exist professors who operate this way. I’m not trying to be nasty — just observing that if they do live a life of the mind, this fact is not externally apparent.

An important thing I learned during my years as an assistant professor is that the job is not really structured to provide a life of the mind. Rather, it is structured — in the ideal case — to give a fledgling academic a reasonable chance to learn how to teach and to develop a full-blown research program including students, grants, and lots of papers.

To live a life of the mind, one has to carve out time. I’m sure that some people accomplish this through good time management. On the other hand, one can — as I do — simply carve out the time regardless of the consequences. For an assistant professor this would increase the risk of not getting tenure. This is unavoidable, unless you instead carve the time out of family life or sleep. For a tenured professor, the consequence is doing a slightly worse job at service, teaching, and short-term research. In both cases it’s a tradeoff; you have to just go ahead and make it under the assumption that the long-run payoff in terms of career happiness and productivity will be worth it.

But why would a person want to live a life of the mind in the first place? It’s not obviously an inherently desirable thing. Rather, I feel like most of my role models have been hard thinkers: they either wrote books that expanded my mind or told me things that I didn’t know and wouldn’t have ever thought of. I came to admire the kind of person who produced these thoughts, and somehow this feeling led me down the academic track. This is actually the best explanation I can give of why I’m a professor. Even three years before I took the job, I had no intention whatsoever of becoming an academic.

It should go without saying that there are probably much easier and less stressful ways to have a life of the mind than becoming a professor. For example, a person could get a degree in a practical, high-demand field such as nursing or plumbing or programming, become very good at it in order to be mobile and command a good salary, but work no more than 40 hours per week. There are 80 remaining non-sleep hours per week for reading and writing books, producing art, discussing philosophy, or anything else that seems appropriate.

The Big Lie About the Life of the Mind

Earlier this year Thomas Benton wrote an essay, The Big Lie About the ‘Life of the Mind’, skewering academic humanities in the United States. His thesis is that there is effectively a conspiracy to produce many more PhDs than there are faculty slots, and to keep the carrot of the tenure-track faculty position just out of reach so that graduate students and PhDs will work for little or no money and without job security.

Is there a Big Lie? A conspiracy? The conspiracy angle doesn’t ring true for me; rather, it’s a flawed system. But really, it doesn’t seem so difficult to be more or less completely honest with students about the realities of academic life, though undoubtedly this is easier in a field like mine where PhDs who fail to get faculty positions still have bright job prospects. Although a professor’s ethical position can be a bit complicated — requiring a balance of loyalty between the students, the institution, and the field — simple human decency seems to demand that students get straightforward advice when they need it.

An ethical conflict that comes up fairly often is when an undergrad asks whether she should go to grad school somewhere else, or stay at her current institution. The accepted wisdom, at least as I learned it, is that a student in this situation is always encouraged to leave. Not all professors agree, and also some who agree in principle are perfectly willing to bend the rule in practice to keep a talented student on the local team. Of course the situation can get genuinely murky when a student has good reasons — family or whatever — to stay local. Other examples where people differ: when an undergrad asks if he should go to grad school, or if a PhD student asks if she should pursue a faculty position. Many profs reflexively answer “yes” to both questions, but it would seem that a more nuanced view is called for. Would this person really benefit from an advanced degree? Would it give her a happier and more productive life than she’d otherwise have? I’ve actually caught a bit of flak from colleagues for creating a “should you go to grad school?” talk that doesn’t portray the PhD program in a sufficiently positive light.

In terms of advice for students: first, find people you can trust and get their take. In my experience most faculty will not lie to a student’s face about their prospects. If you’re exceptionally unlucky and find yourself in some sort of den of liars, use the Internet. Second, you have spent the last N years learning to think critically (if not, you’re sort of screwed); use this ability to answer some important questions:

  • Are you smarter and harder-working than the average grad student in your field? It’s no fun to be below average. Related question: Would you rather be in the dumbest, laziest 20% at MIT or the smartest, most diligent 20% at a lower-ranked school? (Not trying to be snooty here — I didn’t apply to MIT and wouldn’t have gotten in if I had.)
  • Are people in your field getting the jobs they want? It should be possible to find data, not just anecdotes.
  • If you were highly successful in grad school, would that lead to a life that you want?
  • If grad school doesn’t go well for you, are you going to be really mad that you wasted a few years?

A bit of economic thinking can go far towards understanding the lay of the land. Is your field expanding, contracting, or staying the same size? A non-expanding field makes it extremely tough to get a tenure-track faculty position since a slot only appears when someone retires. What are the magnitudes of the various sources of money (state salaries, government grants, tuition, etc.)? What are the magnitudes of the various money sinks (faculty / postdoc / staff salaries, student stipends, etc.)?

In summary, I’d like to think there is no Big Lie about academia, although there may well be plenty of little lies and white lies. Combat this by not being a Pollyanna. Being a professor is a pretty good job and all good jobs have significant barriers to entry. Find people you trust, ask them hard questions, and think hard about the answers. If you don’t like the answers or if things smell fishy, there’s probably a reason.

[The Truth About the Life of the Mind is a loose follow-up to this piece.]

Why Take an Embedded Systems Course?

Embedded systems are special-purpose computers that users don’t think of as computers. Examples include cell phones, traffic light controllers, and programmable thermostats. In earlier posts I argued why any computer scientist should take a compilers course and an operating systems course. These were easy arguments to make since these areas are core CS: all graduates are expected to understand them. Embedded systems, on the other hand, often get little respect from mainstream computer scientists. So why should you take a course on them?

Most Computers are Embedded

Around 99% of all processors manufactured go into embedded systems. In 2007 alone, 2.9 billion chips based on the ARM processor architecture were manufactured; essentially all of these were used in embedded applications. These processors live in your car, appliances, and toys; they are scattered throughout our buildings; they are critical to efficient operation of the infrastructure providing transportation, water, and power. More and more, the world depends on embedded systems; as a technical expert, it’s useful to understand how they work. The market for desktop computers is pretty much saturated; the embedded world is growing and looks to continue to do so as long as people continue to find it valuable to place computation close to the world.

Embedded Programming Can Be Fun and Accessible

Make Magazine has done a fantastic job popularizing embedded systems projects. Lego Mindstorms, Arduinos, and the like are not that expensive and can be used to get a feel for embedded programming. Controlling the physical world is addictive and fun; head over to Hack A Day and search for “paintball sentry.” I defy any nerd to tell me with a straight face that this stuff is not totally cool. This spring I heard a fantastic talk by Sebastian Thrun about his team’s winning effort in the DARPA Grand Challenge (some of his presentation materials are online).

Embedded is Different

Monitoring and controlling the physical world is very different from other kinds of programming. For example, instead of nice clean discrete inputs, you may find yourself dealing with a stream of noisy accelerometer data. When controlling a motor, your code suddenly has to take the momentum of an actual piece of metal into account; if you’re not careful, you might break the hardware or burn out the driver chip. Similarly, robots live in a bewildering world of noisy, conflicting sensor inputs; they never go quite in a straight line or return to the angle they started at. Solving all of these problems requires robust methods that are very different from the algorithmically-flavored problems we often solve in CS.

Embedded Makes You Solve Hard Problems

Difficult embedded programming problems force you to create highly reliable systems on constrained platforms. The software is concurrent (often using both interrupts and threads), must respect timing constraints from hardware devices and from the outside world, and must gracefully deal with a variety of error conditions. In summary, many of the hardest programming problems are encountered all at once.

Debugging facilities may be lacking. In the worst case, even the humble printf() is not available and you’ll be debugging using a logic analyzer and perhaps some LEDs. It brings me joy every year to sit a collection of CS students down in a lab full of logic analyzers; at first most of them are greatly confused, but by the end of the semester people are addicted to (or at least accustomed to) the ability to look at a real waveform or measure a pulse that lasts well under a microsecond.

Modern high-level programming environments are great, but they provide an awful lot of insulation from the kinds of bare-metal details that embedded programmers deal with all the time. People like me who learned to program during the 1980s have a reasonable intuition for what you can accomplish in 1 KB of RAM. On the other hand, programmers raised on Java do not. I’ve helped students working on a research project in sensor networks where their program didn’t work because it allocated more than a thousand times more RAM than was available on the platform they were trying to run on. Many years ago I spent far too long debugging problems that came from calling a printf() variant that allocated about 8 KB of stack memory from a thread that had 4 KB of stack.

All of these difficulties can be surmounted through careful design, careful implementation, and other techniques. By learning these things, students acquire valuable skills and thought processes that can also be applied to everyday programming.

Embedded is in Demand

I don’t have good data supporting this, but anecdotally the kind of student who does well in an embedded systems course is in high demand. A lot of CS graduates understand only software; a lot of EE and ME graduates can’t program their way out of a paper bag. Students who combine these skill sets — regardless of what department gives them a degree — are really valuable.

An Epidemic of Rat Farming

In Hanoi, as the story goes, the French placed a bounty on rat pelts. The locals responded by farming rats. A child who gets candy for cleaning up a big mess is likely to create another mess the next day. These are perverse incentives: incentives that have unintended and often undesirable side effects. As a particularly stupid example, I recently decided to start putting only one sugar cube in my morning coffee and then caught myself pouring small cups of coffee and having two or three.

Once we see the pattern, it should be easy to predict what happens when you reward professors, postdocs, and grad students for producing many publications. The rewards are significant: a long CV can get a candidate in the door for a job interview or permit an assistant professor to keep her job during a tenure evaluation. Obviously numbers aren’t everything, but they matter a lot.

It’s true: there is a large number of low-quality publications being produced. I end up reviewing maybe 100 papers per year, and quite a few of them are just bad (I won’t try to define “bad” here but I took a stab at this earlier). I make an effort to be selective about the things that I review, turning down many requests to review journal papers and a few invitations to be on program committees each year.

The recent Chronicle article We Must Stop the Avalanche of Low-Quality Research says that the main costs of the avalanche are increasing the reviewing load, raising the bar for younger researchers, encouraging shallow inquiry, creating a difficult-to-surf ocean of results, and hurting the environment. There’s some truth to all of these, but I’m not sure they represent the main costs. For one thing, the peer review system is fairly effective at weeding out the bad stuff, and it is continuously improving as we adapt to the increasing paper load. For example, many conferences now have multiple rounds of reviewing to avoid wasting reviewer time on the worst submissions. As for the ocean of results, it is effectively tamed by Google. Suresh makes some similar points in his recent blog post where he equates the paper flood with spam email. It’s not a terrible analogy, but it makes it easy to overlook some of the true costs of the perverse incentive to maximize publications.

It isn’t the flood of bad research that’s the problem; it’s the secondary effects of this flood. First, promising young academics learn and propagate a misleading view of the research process, reducing the likelihood that the high-value, high-impact, once-a-decade results will be produced. To become and remain competitive, talented researchers waste time on short-term results and on spreading results across multiple papers in order to avoid exceeding the LPU (least publishable unit) by much. Bad research isn’t free: it is produced using grant money, which comes from taxes paid by hardworking citizens. Not only is it unethical to waste this resource, but waste has a high opportunity cost because it prevents useful work from being funded.

Fixing the problem is not so easy because the incentives are hard to change. The fixes offered by the Chronicle article are quite unconvincing when applied to my field. My read of the situation in CS is that little fixing is needed at the top-tier schools: a prolific publisher of worthless results is unlikely to be hired or tenured there. (This is one of the many reasons why the top schools tend to stay there.) On the other hand, the incidence of highly quantitative evaluation of faculty in the next few tiers — schools ranked 5-75, maybe — is significant and troubling.

One of the lessons from Michael Lewis’s fantastic book Moneyball is that if almost everyone is using broken metrics, there’s a tremendous opportunity to scoop up players who are undervalued by these metrics. This is exactly what forward-thinking, non-top-tier research departments should be doing at hiring time. The problem is that identifying these candidates and pushing them through internal hiring barriers is hard work. My guess is that departments who do this will win big in the long run; it is simply inconceivable that a focused practice of hiring paper machines will be a long-term advantage. You can fool the university, the tenure committee, and the awards committee at your favorite conference, but you cannot fool the larger ecosystem where significant research results are those that change how people think about the world, or spawn billion-dollar industries.

Extra credit question: What happens if the amount of grant money a professor earns becomes as important as her publishing record for purposes of tenure, raises, and internal prestige?

Professor Value Added

If you buy a university-level instructor a beer and ask her to tell you how great the standardized course evaluation forms are, you’re likely to get an earful. I’m talking about the multiple-choice forms that students fill out towards the end of each course they take, asking them to assign a 1-5 rating to statements such as “The instructor is well prepared for class” and “Directions for course assignments were clear.” It’s not that these questionnaires are absolutely useless — for example, if an instructor consistently gets low scores there’s probably a problem that needs to be looked into — but rather that the scores are strongly affected by factors other than overall teaching effectiveness. For example, teaching a rigorous class with difficult tests and a lot of homework will hurt the score for a course. Conversely, easy grading and a light workload result in higher scores. There’s more to it than that, but to a first approximation, if you meet the students’ general expectations and don’t piss them off you’ll be evaluated well. This is all well-known among university-level instructors but it’s just talk; usually nobody can back it up with real data.

It was a pleasure, then, to run across Scott Carrell and James West’s paper Does Professor Quality Matter? Evidence from Random Assignments of Students to Professors. The paper looks for correlations between course evaluation score, student performance in the course, and student performance in follow-on courses. Doing this study right is trickier than it might initially sound because there are typically many uncontrolled factors such as differences in exams or grading policies between instructors and self-selection effects where the students who take follow-on courses may be the ones who liked the course and did well in it. To make their analysis work, Carrell and West used a data set from the US Air Force Academy which has an unusually constrained sequence of courses, standardizes exams across instructors, and (most important) randomly assigns students to instructors.

The results are pretty great. It turns out that course evaluation score is positively correlated with the grades received in a course, but negatively correlated with performance in follow-on courses. Effectively, students punish professors for inducing deep learning that has value beyond the current class and reward them for teaching to the tests. Also, entry-level instructors induce students to perform better in the course being taught, but experienced instructors produce students who perform better in follow-on courses. There’s a lot of food for thought here, but it’s pretty easy to make the inference that it’s a serious mistake to use course evaluation scores as the sole basis for assessing teacher quality.

Self-Checking Projects

Matching students up with research projects is entertaining but difficult. The project has to be at the right level of difficulty, has to fit the student’s time frame, and has to interest the student. If grant money is going to be used to pay the student, the work has to fit into the funded project. This is all pretty basic, but slowly I’ve come to recognize a less obvious criterion that can make the difference between success and failure. Some projects have the property of being self-checking: a student can easily, on her own, check whether the implementation parts of the project are correct. For example:

  • If a tool that is supposed to find bugs in other programs finds some real bugs, we know that it works, at least to some extent (we can determine if a reported bug is real by inspection). If the tool fails to find real bugs, we can always fall back to seeding software with deliberate errors and trying to find them.
  • A compiler is highly self-checking because we can ask it to compile large existing programs and see if the resulting executables work as expected.

On the other hand:

  • Projects whose top-level result is numerical tend to be very hard to check. I once heard a horror story from a professor who had published a paper based on a student’s work showing that his new algorithm performed much better than the previous best one. Unfortunately, the paper was totally wrong: the student had made an error while implementing the competing algorithm, hurting its performance considerably.
  • A formal program verification project is not self-checking because it’s very easy to accidentally prove the wrong properties, or even vacuous properties.

Of course, not all projects need to be self-checking. It’s just that those that aren’t require significant extra attention from the advisor. Additionally, there are some students who will just fail to make progress on a project that isn’t self-checking. It is not exactly the weakest students who fail this way, but rather those who lack a strong innate sense of technical right and wrong. In summary, it is useful to ask whether a research project is self-checking, and to use the answer to this question to help decide whether the project is a good match for a particular student.

The self-checking principle can be applied to undergraduate education. A course I taught several years ago came with automated test suites that permitted students to determine their score for each homework assignment before turning it in. This approach makes students happy and it is a good match for very difficult assignments that people would otherwise fail to complete. However, when designing a course myself, I have tended to create easier assignments that are not self-checking based on the idea that this encourages students to ask themselves whether they’ve done the right thing–an important skill that is unfortunately somewhat under-developed in CS undergraduates. The weaker half of the students hate this, but it builds character.

White Baldy

White Baldy, on the ridge between the Red Pine and White Pine drainages of Little Cottonwood Canyon in Utah’s Wasatch Range, is an infrequently visited 11,000′ mountain with no really easy routes: its east, west, and north ridges are all messes of bus-sized boulders. Bill and I decided that if we were ever going to climb this mountain, it would be via a snow climb of its broad north face. This face could be a fun scramble in summer, but getting to it would necessitate an hours-long session of boulder hopping in upper Red Pine. Better to just walk on top of it all.

On June 21 we hiked not-speedily to Red Pine Lake, one of the prettiest locations in the Wasatch. The snow was very firm and the small patch of open water on the lake had accumulated a skin of ice overnight. We had good walking to Red Pine’s highest bowl at around 10,200′ and from there the climbing began. The problem with this north face is that it doesn’t have any really pleasant couloirs; as the slope became steeper, there were always sharp rocks sticking out of the snow below us–not so fun to imagine falling into them. As the angle crept past 30 degrees we started running into patches of icy crust where my light mountaineering boots were failing to kick very good steps. With about 600′ to go we chickened out and turned around; putting on crampons (which we hadn’t brought) or waiting an hour for the snow to soften would have also been solutions, but neither of us was super invested in summiting.

We traversed over to the west ridge, stopping to do a bit of self-arrest practice along the way, including a few of the always-frightening backwards / headfirst falls. I hadn’t practiced stopping fast slides for a few years so this was good review. We had lunch looking into American Fork Canyon. It was a great day: sunny and warm in the lee of a boulder, but surprisingly cold in the wind–my hands started to get numb while I was taking pictures. On the way down the snow was getting sloppy but the partially broken-down snow bridge over the Red Pine stream held up fine. Overall, it was an excellent spring snow climb.

Here’s a 360° panorama with White Baldy in the middle.


Starting out as a professor can be intimidating: there’s a lot of work to do and little indication of how to prioritize it. Advice for assistant professors sometimes mentions the 40-40-20 guideline, which states that one should spend 40% of one’s time working on research, 40% on teaching, and 20% on service. This always seemed skewed to me. I finally realized that it’s perfect advice; it just applies only to a regular 40-hour work week. The other 40 working hours need to be spent doing research. (This is closely related to one of my standard stupid jokes: The great thing about being a professor is that I get to choose which 80 hours a week I work.)