Tuesday, June 05, 2007

Ranking

My Gmail RSS monitor has alerted me that the June issue of CACM includes the article "Automatic and versatile publications ranking for research institutions and scholars" by Jie Ren and Richard N. Taylor. In that article, the authors briefly discuss the role of rankings of institutions and individuals in modern academic life, present the main criteria used in existing rankings, and introduce their own fully automatic ranking system, which is available online.

I have not played with the authors' system yet, but the tables they present to substantiate its quality make for some interesting reading. The top five computer science departments in the US, according to their ranking, are as follows.

  1. MIT
  2. University of Maryland, College Park
  3. CMU
  4. Georgia Institute of Technology
  5. Stanford

UC Berkeley is "only" 10th (something I find surprising), Princeton is 22nd, and Harvard is 41st (which I find less surprising).

In Software Engineering, as an Italian abroad, I am glad to see the Politecnico di Milano in 8th place and the University of Bologna in 41st. Amongst individual researchers in SE, Paola Inverardi is ranked 17th. Perhaps more interestingly, the SE rankings given in the article differ significantly from those obtained by others in a previous ranking exercise (the JSS ranking discussed below).

This is what the authors have to say about the discrepancy:

Our ranking is significantly different from the JSS ranking. The second column in Table 2 shows that only two of the top 15 institutions from the JSS ranking are among the top 15 of our ranking. The second column in Table 3 shows that only two of the top 15 scholars from the JSS ranking are among the top 15 of our ranking.

Two policy disparities probably contribute to the difference. First, we included two conferences in our ranking that the JSS ranking did not consider. Secondly, our ranking and the JSS ranking selected different journals and these journals contributed scores differently. The JSS ranking heavily relies on papers published in itself and the journal Information and Software Technology. It also includes a magazine, IEEE Software. The JSS ranking receives almost no influence from ACM Transactions on Software Engineering and Methodology. This study illustrates that the framework can produce dramatically different results when used with different policies, even for the same field.


What does this indicate? Automatic rankings will be very useful in the future, but it will be all the more important to specify clearly how such rankings are obtained. In particular, when evaluating the results of such ranking exercises, I'd really like to know which publication outlets were considered, what weight each was given, and how credit was divided for multi-authored papers. I do not see why each author of a multi-authored paper should necessarily receive only a fraction of the points awarded to the paper. Is writing a paper with a co-author less work than doing it alone?
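
To see how much these policy choices matter, here is a small Python sketch of a toy scoring scheme. Everything in it is invented for illustration: the venue weight tables, the papers, and the scoring rule are my own simplification, not the policy used in the article or by the JSS ranking. The only point is that changing the venue table, or the credit rule for co-authored papers, reshuffles the resulting ranking.

  # Toy illustration of policy-sensitive publication scoring.
  # All venues, weights, and papers are made up; this is NOT the
  # scheme used by Ren and Taylor or by the JSS ranking.
  from collections import defaultdict

  # Two hypothetical "policies": each is just a table of venue weights.
  VENUE_WEIGHTS_A = {"ICSE": 1.0, "TSE": 1.0, "TOSEM": 1.0, "JSS": 0.4}
  VENUE_WEIGHTS_B = {"JSS": 1.0, "IST": 1.0, "IEEE Software": 0.5}

  # Hypothetical papers: (venue, list of authors).
  PAPERS = [
      ("ICSE", ["Alice", "Bob"]),
      ("TOSEM", ["Alice"]),
      ("JSS", ["Carol"]),
      ("JSS", ["Carol", "Dave", "Eve"]),
      ("IST", ["Dave"]),
  ]

  def rank(papers, venue_weights, fractional_credit=True):
      """Score each author; optionally split a paper's weight among its authors."""
      scores = defaultdict(float)
      for venue, authors in papers:
          weight = venue_weights.get(venue, 0.0)  # venues outside the policy count for nothing
          credit = weight / len(authors) if fractional_credit else weight
          for author in authors:
              scores[author] += credit
      return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

  if __name__ == "__main__":
      print("Policy A, fractional credit:", rank(PAPERS, VENUE_WEIGHTS_A, True))
      print("Policy A, full credit:      ", rank(PAPERS, VENUE_WEIGHTS_A, False))
      print("Policy B, fractional credit:", rank(PAPERS, VENUE_WEIGHTS_B, True))

With the first weight table, merely switching from fractional to full credit is enough to swap two of the toy authors; switching to the second weight table changes the top of the list entirely. It is only a cartoon, but it shows the same kind of sensitivity the quoted passage describes.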

1 comment:

Anonymous said...

If it puts College Park at number 2, the system gets my vote. Then again, I hold the dubious distinction of being the only student to willingly drop the CS major there at the height of the dot-com boom because I had better things to do over in the math department.

So I might be a bit biased. :D