Introduction
The journey of scientific discovery often involves challenging established norms, as seen with Copernicus’s revolutionary heliocentric model. In this article, we explore the implications of deviating from consensus thinking, particularly in relation to the development of artificial intelligence and its impact on our cognitive abilities.
When Copernicus introduced his heliocentric model—placing the Sun at the centre of the solar system rather than the Earth—he faced considerable backlash. The Church resisted this change, as it raised significant theological issues, and various astronomers dismissed it, arguing that a geocentric model more accurately explained certain phenomena.
This resistance occurred despite the fact that heliocentric theories had been explored by ancient cultures, including the Greeks and Islamic scholars, and despite existing empirical evidence questioning geocentrism. What facilitated the West’s eventual shift in perspective was not merely data but a willingness to think outside the box and to question established beliefs.
In theory, a large language model (LLM) could have reached a similar conclusion had it been provided with the right data. These models excel at analyzing information and recognizing patterns, enabling them to generate predictive hypotheses and run simulations. However, they can only do this when prompted with the appropriate inquiries.
As these models are trained on vast amounts of text and designed to predict subsequent text, they inherit the prevailing beliefs present in their training data. If the majority of sources support geocentrism, a model trained on that data would likely default to that belief as well. The training process encourages alignment with popular opinions rather than the generation of radical new theories. Moreover, most LLMs are fine-tuned to be helpful and safe, often adhering to expert consensus.
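The pull toward consensus described above can be illustrated with a toy next-token predictor. The sketch below (a hypothetical corpus for illustration, not any real training pipeline) simply counts which continuation appears most often in its "training data"; the minority view, however correct, is never what it predicts:

```python
from collections import Counter

# Toy "training corpus": most documents assert the consensus view,
# one dissents. (Hypothetical data for illustration only.)
corpus = [
    "the earth is the centre",
    "the earth is the centre",
    "the earth is the centre",
    "the sun is the centre",
]

# Count the word filling the subject slot (the word after the first "the").
subject_counts = Counter(doc.split()[1] for doc in corpus)

# A pure frequency-based predictor picks the most common continuation,
# i.e. the majority opinion in its training data.
prediction = subject_counts.most_common(1)[0][0]
print(prediction)  # predicts "earth", the consensus, not the correct minority view
```

Real LLMs are vastly more sophisticated than a frequency count, but the underlying objective is the same: predict the most likely next text, which by construction favours what the corpus says most often.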
Currently, an LLM does not possess the intrinsic curiosity necessary to challenge established paradigms; whether this will ever change remains contentious. While it can skillfully expand on existing hypotheses and address current challenges, going against established norms, such as geocentrism, requires a form of creative thought that could be termed deviant thinking.
Unfortunately, this form of thinking appears to be in decline. Adam Mastroianni has written an insightful piece demonstrating, with numerous examples, how this trend manifests. He explores various areas, from criminal behavior to the homogenization of brand identities and art.
In this context, deviant thinking refers to the ability to challenge established norms. Mastroianni states, “You start out following the rules, then you never stop, then you forget that it’s possible to break the rules in the first place. Most rule-breaking is bad, but some of it is necessary. We seem to have lost both kinds at the same time.”
He further attributes declining scientific progress to this reduction in deviant thought: “Science requires deviant thinking. So it’s no wonder that, as we see a decline in deviance everywhere else, we’re also witnessing a decline in the rate of scientific advancement.”
Copernicus exemplified a deviant thinker, standing against the established theological and scientific consensus of his era. His ability to analyze data and suggest that “the Earth may not be the centre of the universe,” coupled with the courage to publish his findings despite potential repercussions, demonstrates the hallmark of innovative thinking.
The decline of such thinking may be linked to a broader erosion of critical thought, since effective deviant thinking demands a foundation of critical thinking. In a 2001 essay in American Educator, E.D. Hirsch Jr. warned that the rise of search engines and the internet was undermining our capacity for critical thought, a concern raised even before the advent of AI.
Hirsch argued that existing knowledge is a prerequisite for understanding new information, and he criticized educational frameworks that focus solely on skill-building on the assumption that facts can always be looked up. “Yes, the Internet has placed a wealth of information at our fingertips. But for us to utilize that information—to absorb it and enhance our knowledge—we must already possess a storehouse of existing knowledge. This paradox has been revealed by cognitive research,” he explained.
He contended that lasting learning, reading comprehension, critical thinking, and intellectual adaptability are grounded in broad, cumulative background knowledge acquired from an early age. Without this foundational knowledge, neither skills nor internet access can compensate for genuine learning and understanding.
A recent MIT study suggests what many people may intuitively recognize: relying on LLMs may hinder our cognitive abilities. Researchers recorded brain activity using 32-channel EEG and found that participants using ChatGPT exhibited lower levels of brain engagement than those who used traditional search engines or no AI tools at all.
E.D. Hirsch cautioned that merely teaching skills would not be enough to nurture critical thinking abilities; contemporary evidence suggests that LLM chatbots are further impairing these processes. According to the MIT study, ChatGPT users “consistently underperformed at neural, linguistic, and behavioral levels.” Over time, those using ChatGPT became increasingly reliant on copy-and-paste, often demonstrating diminishing effort in subsequent essays.
This decline in deviant thinking is therefore not surprising. We are not only losing the ability to store factual knowledge, and with it the capacity to comprehend new information, but we are also eroding the critical thinking that might otherwise compensate for that loss.
It could be argued that rather than losing these cognitive abilities, we are simply offloading them to machines. We first delegated knowledge storage, and now we are transferring the thinking processes. However, this delegation could lead us to become less capable of critical thinking and deviant thought, making us more compliant with prevailing narratives and more accepting of existing power structures.
One would like to believe that this was not the ultimate goal in developing such technology. As the initial excitement surrounding LLMs fades and reality sets in, we can see that their impact on the economy has been relatively limited.
The actual applications of generative AI remain niche compared with earlier expectations. There are indeed sectors where these models serve as transformative tools, yet another MIT study revealed that 95% of companies are reconsidering their generative AI initiatives due to the absence of tangible benefits. Where the models do excel is in surveillance, targeting, content reproduction, and algorithmic manipulation, solidifying control and conformity.
Nonetheless, the central point here is that generative AI is unlikely to yield anything truly innovative; rather, it is set to provide more of the same. While the technology itself may fall short, our increasing homogeneity—“fitter, happier, more productive,” as Radiohead put it—renders us less capable of engaging in deviant thought. Whether this is a positive or negative development remains uncertain, but it undoubtedly leads to a more monotonous existence.
Conclusion
In exploring the implications of both historical and contemporary perspectives on deviant thinking, it becomes clear that fostering creativity and critical thought is essential. As we navigate a world increasingly dominated by artificial intelligence, we must remain vigilant to preserve our ability to question norms and innovate, ensuring a vibrant intellectual landscape.