Consider the contrasting experiences of two university students navigating their educational journeys.
Student A possesses a high-performance laptop, enjoys a private study space, subscribes to premium GPT services, engages in Discord study groups, and shares “prompt engineering packs” with peers.
In contrast, Student B relies on a communal device, struggles with unreliable Wi-Fi, juggles family and job commitments, fears repercussions for using AI, and strictly adheres to their institution's official guidelines.
AI is not only transforming assessments; it is also giving rise to an unofficial “shadow AI university.” This hidden ecosystem allows digitally savvy and well-resourced students to experiment with AI, creating an alternate learning environment. If universities fail to acknowledge these practices as the foundation for an AI-focused educational strategy, they risk perpetuating a divided student experience and unequal institutional support.
Student-Led AI-Enhanced and Constrained Learning
Students are actively creating what can be described as a “shadow AI university”—a parallel network of learning that exists almost entirely beyond the purview of educational institutions. In channels like WhatsApp and Discord, AI-savvy students collaborate to develop their own learning tools, exchanging prompts and recommending the best AI solutions for various courses. These informal networks act as innovation hubs where students collectively experiment and refine their learning processes.
AI is currently used by students for a wide array of academic tasks, such as preparing for lectures by reviewing materials from learning management systems, summarizing lengthy readings into digestible formats, or creating practice questions for study. Beyond academic applications, AI assists in drafting emails and offers emotional support during stressful times, highlighting how integrated these tools have become in students’ daily lives.
The ingenuity and complexity of these AI practices are noteworthy. Rather than merely providing shortcuts or facilitating cheating, these methods represent an emerging, student-driven approach to learning. Unfortunately, universities often overlook this innovation because their focus tends to be on compliance and misconduct, especially regarding AI use in assessments. By shifting from a defensive stance to one of collaboration, institutions may begin to appreciate and learn from the dynamic AI learning communities students are already fostering.
The Two-Tier Learning System
The growth of informal, student-led AI initiatives has led to the development of a two-tier learning experience within universities. While access to generative AI tools may seem universal—with many free options available—most universities lack a cohesive set of enterprise AI resources for learning, teaching, and assessment. High costs prevent universities from providing these resources widely, which forces students to seek varying levels of AI tools outside their institutions at their own expense. Additionally, a student’s ability to effectively utilize AI is significantly influenced by their digital skills and confidence. Those without reliable devices, fast internet, prior experience with educational technology, or strong peer networks are at a disadvantage when it comes to leveraging AI as a learning tool.
In essence, AI does not inherently level the playing field for students; without targeted support, it primarily benefits those who are already well-positioned to succeed. This issue intertwines with existing equity challenges in universities. Commuter students facing long travel times, individuals balancing work and study, underrepresented groups, and international students all encounter obstacles that impede their ability to develop AI literacy.
Challenges of “AI Permitted” Environments
Current institutional strategies regarding generative AI often start from a risk-centered perspective, emphasizing misconduct and the detection of AI in assessments. Such an approach treats AI primarily as a threat to academic integrity and neglects the vibrant ecosystem of informal AI practices that students have already established—an ecosystem that is often messy, creative, and largely invisible to institutions.
By focusing on compliance, universities risk overlooking the innovative and practical learning methods that students are developing. They also fail to address the disparities in access to devices, tools, and AI knowledge that significantly influence who benefits from these technologies. Even if university policies state that “AI is permitted,” such declarations do little to assist students lacking the necessary digital resources, skills, or confidence to experiment safely and effectively.
Ironically, risk-averse strategies can push AI usage deeper underground, making the “shadow AI university” more difficult to observe and understand. As students grow more cautious about sharing their practices, the gap between institutional narratives and actual student experiences widens. By treating AI predominantly as a compliance issue, universities miss valuable opportunities to address emerging pedagogical and equity challenges.
Developing AI Strategies through Co-Creation and Equity
To create an effective AI strategy focused on equity, universities must move away from mere compliance and toward authentic collaboration with students. This approach involves acknowledging, rather than suppressing, the informal learning practices enabled by AI. Institutions should also ensure that all students have access to reliable, institutionally supported AI tools, so that access is not dictated by personal finances or dependent on unvetted platforms.
However, access alone is not enough; AI literacy must be integrated into the curriculum. Educators must guide students through their developmental journey over the course of their studies, rather than offering AI literacy as an optional add-on. Students should learn to critically assess AI outputs, understand the ethical norms in their fields, and recognize when it may be inappropriate to use AI.
Reflecting on the two students discussed earlier, their paths will likely diverge throughout their university experiences unless proactive measures are implemented. One student may thrive in an AI-empowered environment, while the other may find themselves restricted by uncertainty and lack of access. This presents a genuine and pressing risk.
Ultimately, the key question for university leaders is not simply, “What is our AI policy?” but rather, “Who might be excluded from our AI future, and how can we involve them in its design?”