Starting AI Policy with Values Over Tools: What You Need to Know

The rapid emergence of artificial intelligence in higher education prompted a swift and largely uniform response from institutions: tighten regulations, issue warnings about potential academic dishonesty, and invest in systems that could distinguish genuine student work from machine-generated text. This reflexive response, however, cast AI as a challenge to be managed rather than an opportunity for innovation.

Across the sector, the urgency of the moment led to the creation of policies that tended to be reactive, focused on tools, and primarily concerned with mitigating risks. In many cases, rather than viewing AI as a chance for transformative teaching and learning, institutions perceived it as a dilemma that needed resolution.

At our university, we recognized early on that adhering strictly to this path would not only hinder our aspirations but also undermine our core values. As an institution dedicated to social purpose, our commitments extend beyond mere operational efficiency or compliance with regulations. Values such as inclusion, sustainability, human dignity, equity, and transparent communication underpin everything from our curricula to our community collaborations. If we approached AI exclusively as a technological challenge, we would compromise our identity. Rather than asking, “What tools should we permit?” we instead posed a more fundamental question: “What responsibilities does our educational mission entail in an age dominated by AI?”

The Moment of Clarity

This pivotal inquiry crystallized during UNESCO Digital Learning Week in September. Engaging with educators, policymakers, and researchers from around the globe illuminated a crucial truth: AI has the potential to deepen the inequities that already exist within global education. In conversations with colleagues from regions facing bandwidth limitations, institutions unable to afford commercial tools, and communities with rich but historically marginalized linguistic diversity, it became evident that the dialogue surrounding AI transcended concerns of academic integrity and administrative efficiency. The real issue was understanding who might be overlooked as AI becomes the backbone of learning, and who could be rendered invisible by systems trained predominantly on Western, English-centric, affluent data.

These discussions marked a turning point, dispelling any notion that a responsible AI policy could be formulated in a vacuum, disconnected from issues of justice, geopolitical contexts, or environmental realities. It became increasingly clear that our institutional response must strive to bridge, not widen, global divides. Furthermore, if we aim to lead in this area, our policy must take into account this broader moral landscape. AI adoption should not replicate educational colonialism or exacerbate disparities between resource-rich and resource-poor institutions. Most importantly, it should avoid inadvertently perpetuating the biases inherent in the datasets that inform it.

Upon returning from UNESCO, we approached our formal policy development with a heightened sense of ethical duty. We concluded that the advent of AI in higher education necessitated a reassessment of our foundational principles. This realization led us to anchor our approach specifically in the framework of ecopedagogy, which shaped the comprehensive policy we finalized this year.

Ecopedagogy, introduced to us earlier this year at the EDEN Conference in Bologna, in a paper by Wilson & Wardak, provided a broad framework capable of addressing the challenges we encountered at UNESCO: not only the digital divide but also the environmental costs associated with large language models; not just algorithmic bias but the way AI can centralize epistemic authority within frameworks that reflect only a narrow slice of global knowledge; not merely increased operational efficiency but the human labor that can either be concealed or displaced in the transition.

Viewing AI through this lens transformed our entire policy-making approach. Rather than creating a compliance-oriented document, we assembled a diverse, cross-institutional working group comprising academic staff, professional services personnel, digital experts, and students. The group’s broad membership was deliberate: AI impacts many aspects of university life, from assessments and curricula to student wellbeing and sustainability, and no single area of expertise could adequately address the breadth of these concerns. Our working group thus became a collaborative space for reflection, hosting conversations on topics ranging from the carbon footprint of AI tools to the cultural biases inherent in chatbots, with students voicing their worries about misconduct allegations and the ethical ramifications of tools built on unconsented labor.

From Values to Coherence

What emerged from this process was not merely a collection of reactive regulations, but a coherent narrative. We recognized that AI was already altering the ways students learn, collaborate, write, and express themselves. We understood its implications for faculty workloads and digital confidence, as well as its impact on students with limited access to devices and reliable internet connections. Most critically, we grasped that a student in London using the same technology as a student in Nairobi does not engage in the same interaction: the two do not have equitable access in terms of bandwidth, cultural context, linguistic recognition, or environmental cost. UNESCO made this fact painfully apparent.

Only after articulating this comprehensive landscape did the foundational principles of our policy begin to materialize. The foremost principle, straightforward yet challenging to implement, was that AI should serve to enhance human learning rather than detract from it. This tenet became our guiding anchor, signifying that we could not view automation as an unqualified good despite its inevitability. Instead, we had to question what forms of learning might suffer if overshadowed by generative tools. Our focus needed to shift toward nurturing students’ critical judgment rather than merely their technical proficiency.

From this primary principle, additional tenets naturally followed. If we are committed to broadening participation, then AI integration must prioritize inclusivity and cultural relevance. If sustainability is a core value, we must consider the environmental implications of AI adoption. If transparency is a priority, both faculty and students need to disclose how they utilize AI. If academic integrity is paramount, our focus should be on fostering integrity rather than merely policing misconduct. Finally, if we intend to equip students for a swiftly evolving job market, AI literacy must be integrated into the curriculum rather than relegated to optional workshops.

As we translated these principles into actionable practices, our policy grew into something far more ambitious than we initially envisioned. Curriculum design now includes discipline-specific AI literacy. Assessment procedures will require a clear articulation of permissible AI uses and rationales. We will develop toolkits for both staff and students, establish an AI champions network, and rather than producing a static rulebook, we are designing a dynamic framework that adapts to technological, pedagogical, and societal changes.

An Act of Self-Definition

In many respects, one of the most significant lessons from UNESCO is that discussions about AI in education at the national level are insufficient. Universities do not exist in isolation; they participate within a global ecosystem influenced by unequal access to infrastructure, inconsistent regulatory frameworks, and varying cultural attitudes toward technology. Our policy, therefore, reflects not only our institutional values but also a commitment to global responsibility. It aims to demonstrate how ethical, reflective, and inclusive AI adoption can take shape, even within a sector often caught in a struggle between innovation and apprehension.

If there is one message we would impart to the wider educational community, it is this: the urgency lies not in how swiftly institutions can formulate AI policies but in the narratives those policies convey. A policy rooted in fear will create a defensive educational environment. A policy fixated on tools will quickly become outdated as the technology evolves. Conversely, a policy anchored in values—shaped by global insights, ecological awareness, and educational purpose—will empower universities to navigate uncertainty with integrity.

What initially seemed like a technical challenge transformed, through our participation in UNESCO and our collaborative processes, into a significant act of institutional self-definition. By grounding our policy in our core identity rather than in the capabilities of AI, we shifted from merely reacting to disruption to actively shaping our response to it. In this journey, we discovered that AI policy transcends technology; it encompasses the kind of educational future we aspire to cultivate, both locally and globally. Thus, for us, grounding our approach in values was not just the right starting point; it was the only viable way forward.
