As the landscape of education evolves, the introduction of generative artificial intelligence (AI) in classrooms has sparked heated debates within the academic community. The Conference on College Composition and Communication (CCCC), the largest professional organization for writing educators, asserts that the adoption of such technologies is not an unavoidable outcome. Instead, it advocates for the right of both students and faculty to opt out of using generative AI in writing courses.
Earlier this month, the CCCC passed a resolution that emphasizes this stance. The resolution highlights concerns about the unverified claims suggesting generative AI enhances productivity. It also addresses various issues, such as data privacy, labor rights, academic freedom, environmental impacts, and the vital critical thinking skills developed through the writing process.
According to the resolution, “The work of college writing instruction must be responsive to industry trends and other external factors, but it should not be primarily oriented toward preparing students for a narrow set of technological skills.” It further stresses that rhetoric, composition, and writing studies aim to equip students with the ability to write for purposes beyond mere employment. Students learn to write to navigate complexities, access resources, understand phenomena, foster connections, build communities, process emotions, and actively participate in civic life.
The resolution, overwhelmingly endorsed at the CCCC’s annual convention in Cleveland two weeks ago, showcases the commitment of the writing education community to uphold the right to refuse the use of generative AI, according to Jennifer Sano-Franchini, an associate professor of English at West Virginia University and the immediate past chair of the CCCC.
“This is fundamentally a matter of academic freedom—students and educators should have the autonomy to make their own choices. Dismissing alternatives by stating things like ‘you must use it’ or ‘it’s here to stay’ undermines this freedom,” Sano-Franchini explained. “While these claims are prevalent, I remain unconvinced of their validity.”
The CCCC’s resolution comes more than three years after OpenAI debuted ChatGPT, a tool capable of generating essays, research papers, and creative writing in mere seconds. Its release set off an unprecedented wave of partnerships between higher education institutions and profit-driven tech companies.
Initially, the emergence of ChatGPT and similar generative AI tools raised concerns among educators about increased cheating, a trend recent data has since borne out. At the same time, institutions have been inundated with tech-industry predictions that generative AI could eliminate numerous entry-level jobs, alongside claims that job seekers adept in AI would hold a competitive advantage.
“I initially felt compelled to familiarize myself with it,” Sano-Franchini shared with Inside Higher Ed. “However, I began to notice some students misusing it… I don’t ban it outright, but I also don’t endorse it.”
Instead, Sano-Franchini, whose research examines the interplay of culture, power, and technology, designs writing assignments that would likely challenge generative AI, incorporating elements from prior class discussions. However, she recognizes that her colleagues may choose to incorporate AI differently and is concerned about what students may lose as they increasingly depend on these technologies for writing tasks.
“These companies capitalize on individuals’ writing insecurities. Writing is inherently challenging, and I understand why outsourcing it to a language model could seem attractive,” Sano-Franchini remarked. “However, without investing time to read and comprehend diverse arguments, fostering a collective dialogue about topics and further developing our thoughts becomes incredibly difficult.”
Some of her students have expressed growing skepticism about the prevalence of generative AI in modern society while also confronting the ramifications of forgoing this technology.
Colleen Benison, a master’s student specializing in writing and editing at WVU, noted that while her program has largely evaded pressure to adopt generative AI, she is aware that many students in other programs feel differently. She believes those students should have the freedom to opt out, emphasizing, “If higher education truly aims to cultivate new knowledge and sharpen critical thinking skills while fostering scholarly discourse, students risk neglecting these goals by relying on AI.” Benison, who does not use generative AI herself, added, “Despite the rhetoric about being left behind, there’s immense value in reaffirming the significance of human intelligence. Refusing to use AI doesn’t mean we’re falling behind.”
‘Profiteers and Opportunists’
Despite apprehensions among students and faculty, numerous colleges and universities are rapidly embracing generative AI. Many are making substantial financial commitments.
Several institutions, including Arizona State University, the California State University system, and the University of Colorado at Boulder, have entered into multimillion-dollar contracts with tech companies to provide access to proprietary generative AI tools, touted as essential for workforce development and AI literacy.
Many students and faculty members claim they are frequently excluded from these decision-making processes and find themselves without a choice about using generative AI in their learning environments.
A 2025 survey conducted by the American Association of University Professors revealed that 15% of faculty indicated their institutions mandate the use of AI, while 81% reported being required to utilize learning management systems and educational technology integrated with AI tools they cannot disable. Concurrently, 69% expressed concerns that AI negatively impacts student success, with 95% underscoring the necessity for meaningful opt-out policies.
The CCCC’s resolution posits that choosing to refuse AI empowers educators and students alike.
“By opting out of generative AI, we can step back from the pervasive opt-in culture propagated by Big Tech. It allows for a reevaluation of our interactions with corporate technologies that profit from student and teacher data and intellectual labor, including software for plagiarism detection and learning management systems,” it states.
The CCCC is not alone in voicing apprehensions about the threats generative AI poses to educational practices, though other academic organizations have stopped short of authorizing outright refusal of the technology.
“Students across the spectrum already utilize generative AI tools and will continue to do so,” the American Historical Association’s guiding principles for AI in history education state. “While some dedicated educators have opted to reject generative AI due to its ethical, environmental, and economic implications, ignoring this technology will not impede its advancement or shield our discipline and students from its influence.”
The CCCC has carefully crafted its resolution to avoid assumptions about the inevitability of generative AI’s spread. “We’re neither saying it must be used nor that it must be avoided,” Sano-Franchini clarified. “For too long, individuals have felt coerced into using it, as if not participating equates to ignorance of progress. We aim to challenge that narrative and provide compelling reasons for considering abstention.”
While other areas of academia have not explicitly endorsed the option to opt out, the CCCC is part of a burgeoning movement advocating for such a right among faculty and students. Last summer, over 1,000 education professionals from institutions worldwide signed an open letter opposing the adoption of generative AI in education, calling it a threat to student learning and well-being. The letter pointed to a vast marketing campaign presenting these tools as vital for future careers, despite a lack of substantial evidence that they improve learning outcomes.
For those in academia rallying against generative AI, emphasizing the right to refuse the technology while critiquing the motives of tech companies is seen as a robust approach. Sonja Drimmer, an associate professor of medieval art and architecture at the University of Massachusetts at Amherst, who has written extensively about resisting generative AI in education, remarked, “Concerns surrounding plagiarism distract us from recognizing that profiteers and opportunists are our true adversaries. The term ‘inevitability’ has historically been used to diminish any form of opposition. It’s crucial to ask who is disseminating that narrative and for what purpose.”
Even as questions about generative AI’s potential to improve educational outcomes remain unanswered, Drimmer believes it is essential to scrutinize the sources of pressure for adopting these technologies in the higher education sector. She contends that the CCCC’s resolution serves as a strong countermeasure to this pressure.
“Urgency often overwhelms the consumer’s ability to pause and reflect on whether what’s being offered is genuinely necessary,” she stated. “I see no justification for urgency. The narrative ‘But we have to keep up’ is persuasive, yet it’s rarely followed by inquiries like, ‘Keep up with what?’”
As the debate over generative AI in education continues, the CCCC’s resolution stands as a reminder that academic freedom includes the right to critically evaluate, and to refuse, the tools being integrated into the learning environment. For its backers, preserving that choice is essential to nurturing critical thinking, meaningful dialogue, and the integrity of writing instruction.