Microsoft’s 2016 AI paper tops 21st-century citation rankings

  • Voltaire Staff
  • 1 day ago
  • 2 min read

Image Source: Unsplash

A groundbreaking paper from Microsoft on image recognition now holds the distinction of being the most-cited academic publication of the twenty-first century.


The paper's rise has been propelled by the rapid pace at which AI has developed in the years since its publication, to the point that the field now dominates citation rankings across disciplines.


"AI papers come with natural advantages in the citation stakes," said Geoffrey Hinton, a computer scientist at the University of Toronto and Nobel Prize co-recipient. "Papers in this area are relevant to a huge number of fields, and the twenty-first century has seen extremely rapid progress and a large volume of papers," he told Nature.


The Microsoft paper, 'Deep Residual Learning for Image Recognition,' presented at the IEEE Conference on Computer Vision and Pattern Recognition, followed a wave of innovations stemming from the deep-learning revolution. It built on earlier neural network architectures and set a benchmark in image processing, catalysing wide adoption across academia and industry.
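The core idea behind the paper's residual learning can be sketched in a few lines: rather than learning a target mapping directly, each block learns a residual correction that is added back to its input through a skip connection. The snippet below is an illustrative NumPy toy, not the paper's implementation; the layer widths and random weights are arbitrary choices for the demo.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two-layer residual block: output = relu(F(x) + x)."""
    out = relu(x @ w1)    # first transformation
    out = out @ w2        # second transformation; together these form F(x)
    return relu(out + x)  # the skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # batch of 4 vectors, width 8
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8)
```

Because the block adds its input back, a block whose weights are near zero simply passes its input through, which is part of why very deep stacks of such blocks remain trainable.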


Hinton himself co-authored one of the movement’s pivotal works in 2012. The AlexNet paper, now the eighth-most-cited of the century, introduced a deep convolutional neural network that dramatically outperformed its competitors in an image classification competition. 


A 2015 follow-up, which introduced U-Net, an architecture that requires less training data, is currently ranked twelfth. Its co-author Olaf Ronneberger, now at Google DeepMind, recalls that the paper was nearly rejected for "not being novel enough." Ironically, it has since become foundational to image generation in diffusion models.


Another transformative moment came in 2017, when Google researchers published 'Attention is all you need,' which introduced the transformer architecture. Ranked seventh, the paper laid the groundwork for large language models like ChatGPT by pioneering the self-attention mechanism, allowing models to better prioritise relevant information.
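The self-attention mechanism the paper pioneered can be illustrated compactly: each token's query is compared against every other token's key, and the resulting weights decide how much of each token's value flows into the output. This is a single-head NumPy sketch for illustration only, with arbitrary dimensions, not the full multi-head transformer.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention; x is (seq_len, d_model)."""
    q, k, v = x @ wq, x @ wk, x @ wv         # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise token similarities, scaled
    weights = softmax(scores, axis=-1)       # each row is a distribution over tokens
    return weights @ v                       # weighted mixture of value vectors

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 16))                 # 5 tokens, model width 16
wq, wk, wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (5, 16)
```

The softmax rows each sum to one, so every output token is a convex blend of the value vectors, weighted toward whichever tokens the model deems most relevant.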


Other highly cited AI works include the ImageNet paper from 2009 (ranked 24th), which provided the massive training dataset that fuelled early classification breakthroughs. 


The sixth-most-cited paper, 'Random Forests' by the late Leo Breiman and extended by statistician Adele Cutler, remains popular due to its simplicity and robust performance. 
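The paper's two central ideas, bootstrap resampling and majority voting over many trees, can be shown with a toy sketch. Single-feature decision stumps stand in for full trees here to keep it short; this is a deliberate simplification, not Breiman's full algorithm (it omits, for instance, random feature subsetting at each split).

```python
import numpy as np

def fit_stump(X, y):
    """Find the best single-feature threshold split by training accuracy."""
    best_acc, best = -1.0, None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = (sign * (X[:, f] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (f, t, sign)
    return best

def predict_stump(stump, X):
    f, t, sign = stump
    return (sign * (X[:, f] - t) > 0).astype(int)

def random_forest(X, y, n_trees=25, seed=0):
    """Fit each stump on a bootstrap sample (sampling rows with replacement)."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))  # bootstrap sample of row indices
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_forest(stumps, X):
    """Majority vote across all stumps."""
    votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)

X = np.array([[0.0], [1.0], [2.0], [3.0], [10.0], [11.0], [12.0], [13.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
forest = random_forest(X, y)
preds = predict_forest(forest, np.array([[1.0], [12.0]]))
```

Each stump sees a slightly different resampled dataset, so individual errors tend to be uncorrelated and the vote averages them away, which is the robustness the article refers to.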


According to Nature, the open-source aspect of many AI methodologies has further amplified their citation reach, offering researchers and developers accessible, adaptable tools. However, tracking the true citation impact of these papers remains a challenge.


Many influential AI papers were first released as preprints, ahead of peer review. This practice, while accelerating dissemination, complicates citation aggregation. 


"Most commercial databases either don't track preprints or don’t try to merge their citations with those of the final peer-reviewed article," Paul Wouters, a retired scientometrics researcher at Leiden University, said.



© 2023 by Voltaire News Developed & Designed by Intertoons