AI autocomplete not only alters your writing process; it also influences your thought patterns.
AI-driven writing tools are becoming a staple in our emails and smartphones. Recent research reveals that biased AI suggestions can significantly impact user beliefs.

Autocomplete is an increasingly common tool for writing online, often useful but occasionally frustrating. These features use artificial intelligence to suggest text in emails, surveys, and many other platforms.
Though these tools are intended to improve efficiency, many users report that evaluating and modifying suggested content often takes longer than composing original text. AI applications can also shape how you communicate: an AI writing assistant may make your text sound more formal, though sometimes at the expense of creativity. A recent study by Cornell University researchers further suggests that AI autocompletes can even influence your mindset.
“Autocomplete is now ubiquitous,” said Mor Naaman, a professor of information science at Cornell. His team’s research builds on earlier studies, such as one published in 2023, which indicated that brief AI suggestions could affect opinions. With the increasing use of these technologies, Naaman noted, “it becomes evident that biases ingrained in AI interactions are a very real concern.”
In the study, participants completed an online survey regarding various contentious social and political matters. Some received biased AI suggestions for their responses. For instance, those asked about the legality of the death penalty might encounter an AI prompt advocating against it.
Remarkably, participants exposed to biased AI suggestions reported opinions that aligned more closely with the AI's recommendations, even when they opted not to use any of the suggested text. Overall, those who merely viewed the biased suggestions shifted their views toward the perspectives the AI represented.
Interestingly, participants generally did not recognize the bias in the AI suggestions or the shift in their own viewpoints during the study. Warning participants that the suggestions might be biased did not diminish the persuasive effect either.
“Even after informing participants of potential bias before and after the session, their attitudes still shifted,” Naaman noted. “The warnings did not mitigate the influence.”