Reading deep into university rankings

This is the original unedited text of an article that was published in Focus Malaysia on Sept 28, 2013 under my moniker of Plantcloner.


Each year when the “season” for the various world rankings of universities descends upon us, there are knee-jerk reactions if Malaysia’s “top” universities do not perform well. Institutions that “performed” better in the latest ranking offer lengthy explanations for their “successes”, and if they do poorly in one ranking but only “so-so” in the next, the media gives that plenty of coverage too.

However, I wonder if anyone bothers to ask the simplest of questions: are we comparing apples with oranges when we compare across different rankings? Does the Times Higher Education World University Rankings look at the same criteria as Quacquarelli Symonds’s QS World University Rankings? How about U.S. News’s Best College Rankings (of U.S. institutions) compared with Forbes’s America’s Top Colleges list? Why do some universities do reasonably well under one ranking system but not another?

Unless we know the methodology of each ranking system and “adjust” the data accordingly, we are always comparing apples with oranges, because no two ranking systems adopt the same methodology. Even where they examine the same criterion, the weighting given and the manner in which the data are compiled and analysed vary greatly between systems. One thing is clear, however: the same top 10 to 40 institutions usually appear in a similar range in most ranking systems. But this does not tell us whether a particular ranking system captures data relevant to the key stakeholders: students and parents. Ranking systems that award similar rankings to these top 40 institutions could simply be looking at the same criteria, or these 40 institutions may share features that yield favourable scores and hence high rankings.
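To see concretely why weightings matter, here is a minimal sketch (the universities, indicator scores and weightings below are entirely invented for illustration) of how two systems that score exactly the same indicators, but weight them differently, can reverse an ordering:

```python
# Two hypothetical ranking systems scoring the same three indicators
# (teaching, research, employability) on a 0-100 scale. All numbers
# here are invented purely for illustration.

universities = {
    "University A": {"teaching": 90, "research": 60, "employability": 70},
    "University B": {"teaching": 65, "research": 95, "employability": 60},
}

# System X weights research heavily; System Y weights teaching heavily.
weights = {
    "System X": {"teaching": 0.2, "research": 0.6, "employability": 0.2},
    "System Y": {"teaching": 0.6, "research": 0.2, "employability": 0.2},
}

for system, w in weights.items():
    # Weighted composite score for each institution under this system.
    scores = {
        name: sum(w[k] * v for k, v in indicators.items())
        for name, indicators in universities.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(system, "->", ", ".join(f"{n}: {scores[n]:.0f}" for n in ranked))
```

Running this, System X ranks University B first (82 vs 68) while System Y ranks University A first (80 vs 70), even though both systems started from identical data. A difference in an institution’s position across two systems can therefore tell us as much about the weightings as about the institution.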

I have evaluated the four ranking systems mentioned earlier, and unsurprisingly, Times Higher Education and QS (and to a lesser extent, U.S. News) use many similar criteria. This could be the key reason why many institutions appear in similar positions across these systems.

Only Forbes gives any weight to asking students to score their learning experience and satisfaction. Forbes is also the only one that uses several evaluation channels to provide some measure of “return on investment”, such as the salaries of alumni and the number of alumni who have made a name for themselves and appear on Forbes lists (Power Women, 30 Under 30, CEOs on the Global 2000), plus Nobel and Pulitzer winners, Guggenheim and MacArthur Fellows, those elected to the National Academy of Sciences, and winners of an Academy, Emmy, Tony or Grammy award. Forbes thus puts some weight on how “powerful” and influential an institution’s alumni are or were in society, as a measure of their success. Forbes also gives significant weighting to graduates’ ability to service their student debt, a criterion that indirectly captures their employability and the salaries they command.

I think a good ranking system will answer four basic questions that a student (or parent) needs to consider:

(a) How easy is it for me to get accepted into this institution? The relevant entrance requirements for a range of fields of study need to be provided and evaluated: perhaps SAT or ACT scores for the North American college system, and GCE “A” Levels and other pre-university qualifications elsewhere.

(b) What percentage of applicants are accepted each year? This gives a good indication of how popular an institution is and how stringent its selection process is.

(c) How much will it cost? Is financial assistance provided for high-achieving students? This gives a measure of how much it will cost to finance a student through his or her studies, while the provision of financial assistance to high-achieving students is a measure of how well endowed an institution is.

(d) How fast does the average graduate get a job after graduation, and what is the average starting salary for fresh graduates of this institution? What percentage of graduates get jobs in their areas of study?

Most of the ranking systems cover (a) to (c) in some measure of detail, but only Forbes covers (d), which reflects one of the most important reasons for a student to go to university: to secure a good job and build a career in his or her chosen field.

Another measure of academic quality is the percentage of fresh graduates who progress to postgraduate studies. A more precise measure, however, is the percentage of PhD graduates who secure postdoctoral positions within six months of graduation. This shows how “popular” an institution’s PhD holders are and how other institutions rate the quality of its research training. None of the ranking systems evaluated covers this, even though it is a very good direct peer review of an institution.

A good ranking system, aside from measuring outputs (i.e. the quality of the graduates), also needs to provide a measure of students’ happiness with their lives while at college. Only Forbes covers this aspect well, having students rate the facilities, the teaching and their overall experience.

Apart from the criteria a ranking system uses, the manner in which its data are collected requires detailed evaluation. Some systems, like Times Higher Education and QS World, rely heavily on peers or professional student recruiters/counsellors scoring the academic reputation of institutions, and heavy weighting is often given to this criterion. To me this serves little purpose, as it is highly subjective: the scorers have no concrete data with which to assign a score objectively. Measuring the research output of institutions (such as the number of papers published per PhD thesis approved, the average number of citations received per teaching staff member per year, or the percentage of PhD holders securing postdoctoral research jobs) serves as a better yardstick.
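As a rough illustration of how such objective yardsticks could be computed from an institution’s raw figures (the field names and numbers below are hypothetical, not drawn from any real dataset):

```python
# Hypothetical raw figures for one institution; all numbers invented.
institution = {
    "papers_published": 1200,
    "phd_theses_approved": 300,
    "total_citations_per_year": 18000,
    "teaching_staff": 450,
    "phd_graduates": 280,
    "phd_grads_in_postdocs_within_6_months": 196,
}

def research_output_indicators(d):
    """The three research-output yardsticks suggested in the text."""
    return {
        "papers per PhD thesis approved":
            d["papers_published"] / d["phd_theses_approved"],
        "citations per teaching staff member per year":
            d["total_citations_per_year"] / d["teaching_staff"],
        "% of PhD holders in postdocs within 6 months":
            100 * d["phd_grads_in_postdocs_within_6_months"] / d["phd_graduates"],
    }

for name, value in research_output_indicators(institution).items():
    print(f"{name}: {value:.1f}")
```

Unlike a reputation survey, every input here is a countable quantity that an institution can publish and an outside party can verify.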

Some ranking systems like QS World use “employer reputation” (how employers rank institutions in terms of the quality of their graduates), which does cover the employability aspect, but it is a very subjective way of measuring reputation. A more objective measure would be to score an institution on the percentage of its fresh graduates hired each year by multinationals, Fortune 500 companies, the 100 largest companies in the country, and so on. The proof of the pudding is in the eating: we need to know whether getting a degree from an institution will allow the graduate to (a) get a job readily, (b) join a network of powerful and influential alumni (to enhance his or her career development prospects), and (c) get a job that pays enough to service any debt incurred while studying for the degree.

In fact, way back in 1999, when I first visited New Delhi on business, I was amazed to read advertisements placed by private higher education institutions in major newspapers that read something like this: “90% of our graduates found placements in multinationals and major Indian corporate giants such as XYZ, with starting pay of ZZZ rupees per month”. There was (and still is) intense competition in that market, and these private institutions of higher learning were trying to distinguish themselves by placing their graduates with major employers that paid well. I feel that any university ranking system that ignores this crucial criterion is not giving readers an accurate picture of the “value” each institution generates for its graduates.

Another measure of an institution’s reputation is its industrial linkages (and hence its ability to place graduates in industry). This could simply be the amount of research funding per faculty member that it receives from industry. Relatedly, how well an institution is perceived to serve industry can be gauged by the value its research and development activities bring to industry, measured indirectly by the value of commercialization per faculty member per year. This also reflects the innovative capability and entrepreneurship of an institution.

This is not a comprehensive evaluation of the different university ranking systems. I have merely demonstrated that by drilling down a little into the methodologies, we can discover a lot more about these ranking systems and their relative shortcomings. One should therefore take reports of these rankings with a heavy pinch of salt. At best, we can use them as rough yardsticks to gauge the “reputation” of an institution, but one should not read anything further into them. And we should remember that, no matter how highly an institution is ranked, if its graduates cannot secure jobs in the relevant fields, that institution is doing its stakeholders a disservice.

Footnote:
Plantcloner evaluated many institutions in the Asia-Pacific region while serving as the regional quality manager of a UK-based publishing and education company. He believes that an institution’s reputation is only as good as the graduates it produces.