
Why Science Is Failing: The Real Issues

Participants at Intel’s Artificial Intelligence Day. (Photo by Manjunath Kiran/AFP.)

Recent research has raised important questions about the impact of artificial intelligence (AI) on scientific progress. A study published last month in Nature examined 41 million research papers in the natural sciences and revealed a paradox that should give pause to anyone who believes AI will transform scientific discovery. Scientists who use AI tools produce three times as many papers and garner nearly five times as many citations, yet the overall breadth of scientific subjects explored shrinks by almost 5 percent, and collaboration among researchers declines by 22 percent. The tools that accelerate individual productivity seem to be narrowing the scope of scientific inquiry as a whole.

This isn’t an isolated observation. A study released in December in Science assessed over two million preprints and found a similar trend: the use of large language models (LLMs) is linked to a 36 to 60 percent increase in manuscript submissions. Yet papers produced with AI assistance tended toward a kind of complexity that correlated with a lower likelihood of publication, a reversal of the historical pattern. This suggests that researchers may be producing a higher volume of superficial work dressed up in sophisticated language. Studies in other domains point to a similar “homogenizing” effect, in which AI-generated content, however polished, closely resembles that of other authors. One analysis of over 2,000 college admissions essays, for example, found that human-written essays contained more unique ideas than their AI-generated counterparts, a gap that widened as the number of essays grew.

These insights collectively point to a concerning pattern. While AI may boost the quantity of scientific output, it seems to compromise its quality and diversity. Publications come faster, but they do not necessarily add up to significant breakthroughs.

This dilemma reflects a growing tension within the scientific community. AI proponents promise that these technologies will tackle monumental challenges, finding a cure for cancer, say, or extending the human lifespan, positioning AI as the potential savior of scientific inquiry. But the reality is proving more complicated. Rather than accelerating science, AI may merely be optimizing researchers to succeed within a dysfunctional reward system.

As a neuroscientist at a biomedical research institution, I study AI and cognition and collaborate with scientists across fields to help them communicate their findings. I’ve experienced the rigid dynamics of scientific research firsthand. Researchers face relentless short-term pressure to secure funding through grants with success rates of about 10 percent. This environment generates immense pressure to constantly deliver results, leaving researchers perpetually chasing the next publication or grant application. Institutions compound the problem by treating metrics such as publication counts and citation numbers as proxies for success, a skewed view of genuine scientific advancement. Meaningful progress is often too complex to measure accurately in a tenure evaluation.

The logical response to these pressures is risk aversion. Researchers know they have one chance to demonstrate their work’s value before tenure evaluation, which makes pursuing uncertain or ambitious ideas unwise. Consequently, many scientists gravitate toward safe, incremental studies that yield quickly publishable results, even if those studies contribute little to the advancement of knowledge. A Pew survey found that 69 percent of American Association for the Advancement of Science members believe an emphasis on quick results unduly influences the direction of research.

When AI is introduced into this environment, it exacerbates these challenges. AI excels at processing data and recognizing patterns within existing datasets; its primary strength is doing what has already been done, faster. But true scientific advancement arises not from mere efficiency but from novel insights and theories. As the authors of the Nature study emphasize, major discoveries throughout history have often stemmed from groundbreaking interpretations of nature rather than more refined analysis of existing information.

It is not surprising, then, that AI-driven research clusters around data-abundant topics. AI requires substantial datasets to operate effectively, which naturally leads to the neglect of important questions that lack sufficient data. Scientific exploration shaped by AI thus risks resembling the classic “lamppost problem”: searching for answers only where the light is brightest, rather than in the darker regions of knowledge where they may actually lie.

Some researchers have warned of a “scientific monoculture,” in which reliance on the same AI tools and datasets converges researchers’ questions and methodologies. By treating AI as an objective assistant that mitigates bias, scholars may place unwarranted trust in these tools, assuming the systems understand more than they do simply because their outputs sound confident and coherent, even when drawn from data the scientists themselves have barely examined.

This does not mean AI has no place in science; it remains genuinely useful to individual researchers. But the broad impediment to scientific progress lies not in the technology but in the organizational structure of science itself. A recent interview with Mike Lauer, a former NIH official, highlighted key deficiencies in what he calls a “fundamentally broken” system: scientists spend roughly 45 percent of their time on administrative tasks rather than research; grant applications have swelled from four pages in the 1950s to over one hundred today; and, alarmingly, the average age at which scientists secure their first major independent grant has climbed to 45. By that standard, a person can be trusted to perform intricate medical procedures long before the system deems them competent to run their own research program.

The roots of this predicament trace back to the biomedical sciences, where the NIH adopted a funding model dating to the Great Depression, one built around small, short-term grants, an approach criticized even at the time for reducing science funding to “a dispensary of chicken feed.” This model is not merely outdated; it is fundamentally at odds with the nature of scientific inquiry. Competitive funding treats researchers as vendors bidding for contracts, compelling them to forecast their discoveries and adhere to rigid timelines. Yet science is not a construction project: hypotheses fail, experiments yield unexpected results, and significant breakthroughs often come when scientists follow their curiosity down unforeseen paths.

These organizational flaws and misaligned incentives are unglamorous bureaucratic problems, and they won’t vanish simply because scientists are armed with faster tools. On the contrary, AI could make things worse, ramping up output without reforming the incentive structures that undervalue novel ideas in favor of an overwhelming quantity of incremental studies.

AI can indeed aid scientific endeavors, as breakthroughs in fields from protein biology to nuclear fusion attest, but those successes come from specific applications of AI designed to tackle well-defined scientific problems. The sweeping increase in AI use documented in the Nature study instead reflects researchers deploying data-driven and language technologies to speed up existing practices. And the findings of the Science study suggest that this acceleration may yield a proliferation of lower-quality manuscripts disguised as polished contributions. True scientific progress comes not only from resolving established questions but from formulating new ones. The underlying organizational problems that shape how most scientists use AI thus remain the real bottleneck.

AI companies, funding bodies, and policymakers tend to treat AI as a miracle catalyst, a way to inject speed into the scientific process. Yet, as a recent analysis suggests, this is like adding lanes to a highway when the actual slowdown comes from a tollbooth. The pertinent question is not how to add more lanes but why the tollbooth is there in the first place.

Tim Requarth is the director of graduate science writing and a research assistant professor of neuroscience at the NYU Grossman School of Medicine, where he studies the impact of artificial intelligence on scientific thought, learning, and communication. He authors “The Third Hemisphere,” a newsletter exploring AI’s implications in cognitive science.

