Predicting Grad School Performance

I enjoy going out drinking with my colleagues, although it only seems to happen a few times a year. It should come as a surprise to nobody that professors are natural bullshitters and people always have good stories: nearly destroying a ticket booth at Alta while doing avalanche control work, barely sub-nuclear pyrotechnic displays out in the desert, skiing across Siberia during the good old days of the USSR, things like that. It is perhaps relevant that one of my colleagues, Erik Brunvand, is the son of the guy who popularized the term “urban legend.” We also tell work stories; these are slightly less colorful but this is one of my favorites:

At some point before my time at Utah, some professors got together and decided to bring a few principles to the chaotic process that is graduate admissions. The experiment worked like this. First, the applications to graduate school from a number of previous years were gathered up, and a spreadsheet was created containing every current graduate student along with all of the quantitative data from their applications. This spreadsheet was sorted by various fields such as GRE score, undergraduate GPA, and so on. Then, the resulting lists of names were passed out to the faculty without telling them what the sorting keys were. Each professor was asked to choose which list or lists best sorted the graduate students by ability. Finally, the results were tallied up. The most important predictor of grad student quality turned out to be whether the student arrived with an MS degree or not, followed closely by social security number and then phone number. GRE and GPA were not predictors at all.
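In effect, the experiment asks which application field best rank-correlates with faculty judgments of student quality. Here is a minimal sketch of that idea in Python, using Spearman rank correlation; the field names and numbers are invented for illustration and have no relation to the actual data set (ties are ranked arbitrarily in this toy version):

```python
def spearman(xs, ys):
    """Spearman rank correlation between two equal-length sequences.

    Replaces each value with its rank, then computes the Pearson
    correlation of the ranks. Ties are broken arbitrarily here,
    which is fine for a sketch but not for real analysis.
    """
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

# Toy data: one row per current student, with application fields
# plus a (hypothetical) faculty rating of student quality.
students = [
    # (GRE quant, GPA, has_MS, faculty_rating: higher = better)
    (760, 3.9, 0, 2),
    (720, 3.5, 1, 9),
    (800, 3.8, 0, 4),
    (680, 3.6, 1, 8),
    (740, 3.7, 1, 7),
    (790, 4.0, 0, 3),
]

ratings = [s[3] for s in students]
for name, idx in [("GRE", 0), ("GPA", 1), ("has_MS", 2)]:
    vals = [s[idx] for s in students]
    print(f"{name}: rho = {spearman(vals, ratings):+.2f}")
```

A field whose rho is near +1 sorts the students almost exactly as the faculty would; a field near 0 carries essentially no signal, which is the story's punchline for GRE and GPA.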

13 Replies to “Predicting Grad School Performance”

  1. I find this study to be very interesting. I come from an economics and statistics background and have always wanted to get my hands on a data set like this. I was unable to collect data of this kind during my graduate studies, although I tried. Do you have an anonymous data set available for research?

  2. Hi Wesley– Unfortunately this particular data set is long gone. I think the most interesting and valuable part of this data is the outcomes: the rankings of student quality. I’m not aware of departments that track this in any systematic way, though many departments do keep track of facts like which PhD students have obtained faculty positions (in fact this is often publicly available since departments like to brag about it). In contrast, the application data is pretty routine: almost any department anywhere that awards graduate degrees will have mountains of this. These days it is all electronic so really you just need to convince them to share it with you. However there would no doubt be some hurdles due to privacy concerns, and it’s even possible that they cannot legally share the data without consent of the applicants.

  3. Not very meaningful if they only looked at the admitted students. If all admitted students are close to the borderline and the factors are being weighted correctly, you’d expect no correlations.

    P.S. Sorry if this isn’t a jump-in-and-comment blog. I found it through Google Reader.

  4. I also found your blog through Google. There are tons of studies showing that the GRE predicts academic performance, and does so pretty well. I’d be happy to point you to some of them, if you want.

  5. Hi Dan– I’m sure, but why would we want to let facts get in the way of a good bar story?

    Dspeyer– Ok, but it’s pretty hard to evaluate students who weren’t admitted (or who were admitted but didn’t accept the offer).

  6. I should add that one reason this story appeals to me is that I’ve done graduate admissions four or five times and never found GRE or GPA to be a helpful discriminator between applicants.

  7. GRE and GPA were good predictors of academic ability, i.e., the ability to do well in the classroom and get good marks. I believe studies back this up. However, I don’t think they are good predictors of research ability, which is what really matters for excelling in graduate school.

  8. Steven– I think this is correct. As far as I know, the only good predictor of research ability is a previously demonstrated capability for doing research, which is why undergrads who have worked on a research project have such an advantage in graduate admissions. Personality traits such as curiosity, perseverance, intelligence, and an ability to ask new and difficult questions are all necessary but not sufficient conditions for becoming a good researcher.

  9. Actually, the GRE predicts both research productivity and citation counts. It also predicts graduate GPA, comprehensive exam scores, and faculty ratings.

    Forgive my “passion” for the GRE; however, I think it’s important to correct inaccuracies (I am an industrial-organizational psychologist). Standardized testing is under attack in the US, and most of the arguments against it are just plain false.

    John, if I may ask, can you elaborate on your comment number 6?

  10. Hi Dan- Ok I’m interested, can you post a few pointers?

    Sure, I can explain a bit more about my lack of confidence in GRE scores. In CS most serious applications feature very high math and analytical scores and middling to good verbal scores. There is simply not a huge variance in scores among applicants who we might actually accept. Sure, a score of 300 on the math section will disqualify an application, but do you think there’s much predictive power in the 600-800 range? Almost certainly not.

  11. I actually think there is a huge difference between a 600 and an 800. A 600 quant is around the 50th percentile; an 800 must be around or over the 80th percentile. Even variance at the top of the distribution counts. For computer science it would make sense to screen out everyone below 700-750 on the quant GRE. I’m not sure how well the verbal section predicts for the hard sciences; if memory serves me right, not too well.

    What the GRE offers is standardization and, as a result, fairness. GPA can mean different things depending on the school (a 4.0 at a community college vs. a 4.0 at a top school); the GRE is the same for everyone. Letters of recommendation and research experience have very little, if any, predictive power. So from a scientific, evidence-based perspective, I would use the quant GRE and the subject GRE as the most important predictors (weighted most heavily) in the admissions process. Throw in the GPA, and you’ve narrowed the pool of applicants considerably. After that one can look at research experience, fit with the professor/department, and other subjective indicators. Structured interviews (asking all applicants the same questions) can also be used.

  12. Firstly, I don’t think the GRE is the most important factor for distinguishing applicants. For Chinese students, scoring a 790 or even a perfect score on the GRE is a piece of cake; it is like an easy MIDDLE SCHOOL exam.
    Secondly, as for previous research project experience: speaking as a student from top-ranking universities (undergrad and graduate), and from what I know of my friends, undergraduates seldom have such opportunities. This may not be for lack of ability, but for lack of a platform and resources, and most importantly, because of the educational style in China. Some projects may become available once undergraduates become master’s students, but as far as I can see, those can hardly be called “research” experience. They are just software engineering jobs that don’t involve much of the ability to do truly good research in the future. Still, if I were a professor comparing undergraduates and master’s students, I would prefer the master’s students, especially in CS: there is a high probability that the former group’s programming skills are not as good as the latter’s, and their perspectives (and knowledge) are neither as broad nor as specialized. BTW, these impressions are based on the undergraduates I encountered at my previous CS department, as well as on those mentioned by friends who are also in U.S. graduate schools.

  13. One of my lecturers did a research project on undergraduate entry selection for engineering. According to his results, the best predictor of academic success in a four-year engineering course was the unadjusted prediction from the student’s high-school physics teacher.
    Of course, we never used that; it is worth remembering that the university admission interview was invented as a method of keeping Jewish students out of Ivy League colleges.
    But I always think of it when I see teachers arguing against external exams. Yes, an external exam result tells teachers nothing they don’t already know about their students; no, communicating what the outgoing teachers already know is not the purpose of examination.

Comments are closed.