The annual release of Stanford University professor John Ioannidis’s list of the world’s top 2% of scientists consistently generates a buzz within the scientific community and academic institutions. For weeks after each edition appears, numerous organizations, from well-known universities to lesser-known colleges, proudly announce across various media platforms the inclusion of their Indian scientists on this esteemed list. They celebrate the recognition as a testament to their scientific excellence, often adding that such achievements would not have been possible without their supportive research environment.
The 2025 list, unveiled in September, featured approximately 230,000 scientists globally, meticulously filtered from an initial pool of 22 million. This exclusive group, which even included several Nobel laureates, remarkably contained 6,239 scientists from India, a number that has been consistently rising in recent years.
Questioning the “Quality Link”
Interestingly, the top ten Indian scientists on the 2025 list were ranked between 288 and 952. These individuals were affiliated with institutions like Muthayammal Engineering College (Tamil Nadu), University of Petroleum and Energy Studies (Uttarakhand), Thapar Institute of Engineering & Technology (Punjab), Indian Institute of Toxicology Research (Uttar Pradesh), Sikkim University, National Institute of Mental Health and Neurosciences (Karnataka), Saveetha School of Engineering (Tamil Nadu), Government Degree College Pulwama (Jammu & Kashmir), and S.V. National Institute of Technology (Gujarat).
The pattern was similar in 2024, when the top ten Indian scientists, ranked from 163 to 1,568, predominantly hailed from less prominent institutions.
What is truly astonishing is that six of the seven science Nobel laureates on the list were ranked far lower, between 1,373 and 28,782, well below even the lowest-ranked Indian scientist in the top ten. In other words, the list implies that even the last of these ten Indian scientists outperformed almost every Nobel laureate on it. That is, to put it mildly, a startling claim.
However, whether this vast disparity genuinely reflects differences in research quality is a crucial, separate question. This disconnect is further highlighted by the observation that these highly ranked Indian scientists are often not associated with the country’s premier research centers known for outstanding R&D. To truly grasp this discrepancy, we must first understand the fundamental nature of scientific research and how it is typically assessed within the current academic framework.
Building on Collective Knowledge
Scientific inquiry begins with a scientist observing something intriguing and posing a question. They then formulate a hypothesis and design experiments to test it. This process might involve developing new tools, collaborating with other scientists to gain diverse perspectives, collecting data, and analyzing evidence to draw conclusions. These findings are subsequently documented in reports, commonly known as papers, which undergo rigorous peer review before being published in scientific journals. The names of the contributing scientists, or “authors,” are prominently featured on these papers.
A significant portion of modern scientific research is built upon the foundational work of others. Therefore, when authors of a new paper find older research relevant to their current work, they cite it. This act of citation serves as an acknowledgment, linking their work to the broader chain of knowledge. Each time a paper is referenced by another, it accrues one citation.
Traditionally, a scientist’s work is often evaluated by the number of citations their papers accumulate. The underlying, somewhat naive, assumption has been that scientists only cite high-quality papers, leading to the belief that a paper’s citation count directly indicates its impact. However, this assumption doesn’t always hold true.
Understanding Science Metrics
Professor Ioannidis compiles his list using Scopus, a comprehensive global database of published research. Scopus is owned by Elsevier, a publishing giant frequently criticized for capitalizing on academia’s “publish or perish” culture to generate profit margins comparable to those of the tech behemoths.
For his analysis, Professor Ioannidis devised a composite metric called the ‘c-score’ for each scientist in the Scopus database, ranking them in descending order based on this score.
The c-score gives equal weight to several parameters, including the total number of citations, the h-index (a scientist has an h-index of h if h of their papers have each received at least h citations), the overall number of papers, author order, and co-authorship details.
A significant criticism is that the ranking amalgamates scientists from numerous diverse fields and sub-fields, making comparisons inherently problematic—akin to comparing apples and oranges. Furthermore, it’s important to note that, contrary to popular belief, Stanford University neither participates in nor officially endorses this ranking process. It remains Professor Ioannidis’s independent academic initiative.
Nobel Prize vs. C-score: A Critical Look
To grasp why Indian scientists from lesser-known institutions might outrank Nobel laureates on this list, a closer examination of the c-score is essential. While Professor Ioannidis and others propose that it offers a comprehensive view of a scientist’s impact, it has considerable limitations. The c-score disproportionately favors papers where the scientist is the first, single, or last author, assuming these positions always signify major intellectual contribution, an assumption that does not hold uniformly because authorship conventions vary widely across fields. It also fails to account for differing citation practices between disciplines, assumes equitable coverage of all disciplines by the Scopus database (which it does not provide), and largely overlooks qualitative impact.
Consequently, a scientist’s c-score can become detached from the actual scientific content, quality, validity, and societal contribution of their work. By contrast, the Nobel Prize selection process, despite its own complexities, meticulously considers the very qualitative aspects that the c-score often misses.
Like many other metrics that reduce the rich, multi-dimensional nature of scientific research to simplistic numbers, the c-score is susceptible to manipulation. Individuals might, for instance, pre-arrange to cite each other’s papers regardless of actual quality or relevance. This potential for “gaming the system” is evident in the extraordinarily high productivity rates, sometimes 1-2 papers per week, seen among many highly ranked Indian scientists. The rankings also lack any mechanism to penalize retractions arising from misconduct, since such factors are not incorporated into the c-score formula. Moreover, the Scopus database itself includes publications from many questionable journals and publishers with lax ethical standards.
Without a thorough understanding of the c-score’s methodology and limitations, one might mistakenly interpret Indian scientists outperforming Nobel laureates on Professor Ioannidis’s list as evidence of unrecognized brilliance. Such a conclusion would be misguided. Ultimately, like many other quantitative measures, the c-score is largely a “vanity metric.” Researchers and their institutions should instead prioritize conducting high-quality research and fostering an environment conducive to genuine scientific inquiry, rather than merely chasing numerical rankings.
By Swaminathan S., retired professor, BITS Pilani – Hyderabad, and former scientist at ICGEB, New Delhi.