
How Generative AI Influences Student Thinking Beyond Learning

As the concept of artificial intelligence (AI) becomes increasingly prevalent in discussions about higher education, our understanding of its role remains somewhat vague. While many agree on its classification as a tool, tutor, or assistant in academic contexts, these metaphors can obscure deeper implications. Understanding how we discuss AI is vital as it profoundly shapes our perceptions and actions concerning this technology.

Metaphors and Their Impact

Metaphors hold significant power in shaping our thoughts, especially when we address complex subjects. This is evident in the work of political speechwriters, public relations experts, and advertisers, who use metaphors to foreground certain perspectives and obscure others, often without our conscious awareness.

A notable study by Stanford psychologists Paul Thibodeau and Lera Boroditsky illustrates this. Participants read a brief passage about crime in a fictional city, Addison, framed either as a ‘beast’ or a ‘virus.’ Those who encountered crime as a beast favored increased policing and punishment, while those who saw it as a virus were more inclined to propose social reforms and rehabilitation strategies.

The researchers noted that these metaphors are not mere embellishments; they significantly affect how we conceptualize and act regarding societal issues. Perhaps most striking is that participants seldom acknowledged the influence of metaphors on their thinking, with only 2 percent recognizing their impact.

The Tool Metaphor’s Limitations

Viewing AI as merely a tool restricts our thinking, implying moral neutrality; tools can be employed for both good and bad purposes, and the onus is placed on the user. Under this framework, when issues arise, they are attributed to misuse or misapplication of the technology. This leads to discussions about responsible AI use, ethical considerations, and the necessity of evaluating outputs—all categorized under AI literacy.

This framing conceals AI’s role in shaping our thoughts and deflects inquiry into the ethical responsibilities of the companies and developers building these systems. It keeps us from considering how AI might influence our understanding, manipulate our judgment, or modify our intellectual habits. As people who pride ourselves on being adept with tools, we find it difficult to see AI as a participant in our cognitive processes, one that contributes to our perception of reality.

The Emergence of a Hidden Curriculum

Is it overly dramatic to suggest that we are witnessing the rise of a second hidden curriculum? The “assistant” metaphor implies a hierarchy in educational settings, positioning professors and students as the primary authorities while AI merely assists. This perspective masks instances in which AI actively guides learning—structuring explanations, highlighting interpretations, modeling approaches to new subjects, and directing future inquiries.

This shift represents a new paradigm in how we acquire knowledge, creating an AI-mediated educational framework that runs parallel to established lesson plans and objectives. It’s uncertain whether we, or our students, fully comprehend the implications of this influence. Once again, metaphors complicate our understanding of what is truly occurring.

Unpacking Metaphors and Responsibilities

Metaphors can also mislead us in subtler ways. When AI generates inappropriate or controversial outputs, we often anthropomorphize these systems, attributing agency by stating, for instance, that “Grok is racist” or that the AI has “gone rogue.” This tendency not only misrepresents the nature of these systems but also distracts us from our responsibilities as users, and the institutional and corporate decisions involved in developing these technologies. Consequently, accountability diminishes, along with our ability to exercise critical judgment.

Revising Our Vocabulary

Both the ‘tool’ and ‘assistant’ metaphors invite misplaced trust where vigilance and careful scrutiny are required. So, what should we do? While metaphors can simplify complex ideas, they become obstacles to thoughtful lesson planning and policy development. We should switch to more precise language, choosing our words with care rather than convenience.

By adopting technical terminology, we can clarify our discussions. For instance, instead of saying AI “suggests” or “answers questions,” we can refer to algorithmic outputs generated under specific conditions. Rather than “asking AI,” we engage in probabilistic text generation.

Emphasizing Accountability

This linguistic shift highlights our interpretive and evaluative roles. When we stop calling AI “hallucinatory” and instead speak of “predictive text failure,” verification becomes a standard part of our process rather than an optional step. Prompting should be understood as experimental language design, not as using a typewriter or conversing with a tutor.

By insisting on terminological precision, we can resist the temptation to view AI as merely a helper. Instead, we must confront the uncomfortable reality that these systems actively shape our thoughts, discourse, and learning experiences.

Ultimately, we do not require improved metaphors for artificial intelligence. Rather, we need to reduce their prevalence and embrace a more disciplined approach to thinking, discussing, and writing—ensuring moral responsibility, sound judgment, and pedagogy remain firmly within human control.

James Garvey is the chair and professor in liberal arts at the College for Creative Studies in Detroit.
