dc.description.abstract | University rankings widely affect the behaviours of prospective students and their families, university executives, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real outputs are of no common value. Rankings must be soundly based in scientific terms if a virtuous relationship between performance and ranking is to be established, the worst potentials of rankings are to be constrained, and rankings are to be optimised as a source of comparative information. This article evaluates six ranking systems (the Shanghai ARWU, the Leiden Ranking, QS, Scimago, Times Higher Education and U-Multirank) according to six social science criteria and two behavioural criteria. The social science criteria are materiality (rankings must be grounded in the observable higher education world), objectivity (opinion surveys should not be used), externality (ranked universities should not be a source of data about themselves), comprehensiveness (rankings should cover the broadest possible range of functions), particularity (ranking systems should eschew weighted multi-indicator composites and proxy measures) and ordinal proportionality (vertical distinctions between universities should not be exaggerated). The behavioural criteria are alignment of the ranking with tendencies to improved performance across all institutions and countries, and transparency, meaning openness to the strategy making of institutions seeking to maximise their position. The pure research rankings rate well overall but lack comprehensiveness. U-Multirank is also strong under most criteria but is stymied by its heavy reliance on subjective data collected via survey. | en