In the rapidly evolving realm of higher education policy, discussions frequently swing between caution and rapid innovation. On one hand, some lawmakers advocate a “wait and see” stance, while on the other, technology supporters urge aggressive progress. For a country where daily uncertainty is a reality, however, there is no time to wait. As technology moves swiftly from traditional text to advanced multimedia and autonomous solutions, Ukraine finds itself without formal AI legislation yet still effectively navigating this landscape through a resilient framework of soft law.
Ukraine’s approach to managing AI in education offers an intriguing model for others globally. This initiative isn’t merely theoretical; it is a practical, tiered strategy designed for agility. Two pivotal documents underscore this framework: the Recommendations for the responsible implementation and use of AI technologies in higher education institutions and the Instructional and methodological recommendations on the introduction and use of AI technologies in secondary education institutions. Together, they signify a shift in perspective—a recognition that AI should be seen not merely as a disruptive force but as essential national infrastructure that requires thoughtful governance.
A Tiered Governance Ecosystem
The primary challenge in AI governance stems from the growing divide between formal legislation and everyday application. While national bodies grapple with crafting inclusive laws, students and educators are already integrating AI into their workflows as a foundational tool. Ukraine has opted to embrace a soft law approach, which, contrary to its name, functions as a dynamic steering mechanism that can adapt to technological advancements.
This ecosystem comprises four distinct layers to ensure that overarching ethical principles translate into practical guidelines within educational syllabi:
- National guidance: High-level principles and a common vocabulary established by the Ministry of Digital Transformation and the Ministry of Education and Science.
- Sector-level codes: Quality assurance standards that convert high-level principles into shared templates for the broader academic community.
- Institutional policies: Specific rules regarding roles, procurement, and data handling.
- Course-level rules: Where educational outcomes intersect with reality through AI clauses in syllabi and assignment-specific instructions.
For higher education institutions, effective governance rests on five operational pillars: Rules, Roles, Workflows, Training, and Review. This framework ensures that the adoption of AI is a strategic institutional initiative rather than a series of disconnected individual efforts.
The Higher Education and School Playbooks
The guidelines for higher education are framed not as a manifesto but as an “implementation kit.” The objective is to transition from a binary mindset of “ban vs. allow” to a more nuanced model of institutional governance.
Aligned with the EU AI Act, these recommendations categorize AI applications based on risk. Tools used for critical decision-making—such as student admissions, grading, or behavioral monitoring—are deemed high-risk. Instead of an outright ban, which is acknowledged as “short-sighted and harmful,” the emphasis has shifted towards employing a rigorous risk-screening algorithm. Universities are advised to assess tools using a “red flag” system, where those posing threats to human rights or data privacy are classified as Unacceptable/High Risk and either rejected or closely monitored. Tools assessed as Medium/Low Risk can be utilized with specific safeguards in place.
This vetting process involves a “suitability checklist” evaluating the tools’ functions and alignment with educational goals. It also takes into account critical local factors, such as the availability of Ukrainian-language interfaces and the capability to process payments in local currency—an essential criterion in a war-impacted economic landscape.
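The screening logic described above can be pictured as a simple decision rule: any red flag forces the Unacceptable/High Risk category, otherwise the tool is Medium/Low Risk and the suitability checklist decides whether it can be adopted with safeguards. Here is a minimal sketch of that logic; all flag and check names are hypothetical illustrations, not terms from the official recommendations.

```python
# Illustrative sketch of the "red flag" risk screening described in the
# recommendations. Field names below are invented for this example.

RED_FLAGS = {
    "threatens_human_rights",   # e.g. covert behavioral monitoring
    "violates_data_privacy",    # e.g. uncontrolled student-data sharing
}

SUITABILITY_CHECKS = {
    "ukrainian_interface",      # Ukrainian-language interface available
    "local_currency_payments",  # payments possible in local currency
}

def screen_tool(profile: dict) -> str:
    """Classify a tool: any red flag means Unacceptable/High Risk;
    otherwise Medium/Low Risk, adoptable only if suitability checks pass."""
    if any(profile.get(flag) for flag in RED_FLAGS):
        return "Unacceptable/High Risk: reject or monitor closely"
    if all(profile.get(check) for check in SUITABILITY_CHECKS):
        return "Medium/Low Risk: allow with safeguards"
    return "Medium/Low Risk: address suitability gaps before adoption"
```

In practice the checklist would of course be richer than two booleans per dimension, but the two-stage shape (hard stops first, fit questions second) is the point of the vetting sequence.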
The document promotes a “problem-pilot-training-review” sequence whereby universities identify a specific pedagogical or administrative issue, pilot a tool in a controlled setting, train staff on its specifics, and establish a regular review timeline. This approach reflects the understanding that policies developed today may quickly become outdated as autonomous agents gain traction.
While the higher education recommendations focus on professional autonomy, the guidelines for general secondary education prioritize safeguarding minors and fostering foundational cognitive skills. In alignment with global standards and the UNESCO Guidance for Generative AI, the guidelines mandate that AI tools be accessible only to students aged 13 and above, with explicit parental consent required for those between 13 and 18. This measure acts as a crucial protection in a landscape where AI tutoring services often target children without adequate privacy measures.
The school playbook identifies specific “pedagogical substrates” that AI can provide to enhance teaching. AI is defined explicitly as a supplementary tool rather than a primary information source. Notable supports include inclusivity—using speech-to-text and visual generation to assist students with special educational requirements or those displaced due to conflict; enhancing teacher productivity by automating lesson plan creation and administrative tasks; and gamification, facilitating tailored learning paths that keep students engaged in hybrid or remote settings necessitated by war.
Resilience Through Design
The strength of the Ukrainian model lies in its emphasis on immediate operational governance. While many sectors may stagnate in pursuit of the ideal law or policy, Ukraine demonstrates proactive action. Its experience can serve as a practical guide for immediate implementation:
- Create a one-page data rule: Define clearly which sensitive information (grades, health data, confidential research) must never be input into public AI platforms.
- Shift from literacy to verification fluency: Equip staff and students with “verification habits”—the technical and critical skills required to evaluate outputs from language models and recognize inaccuracies.
- Establish simple risk checklists: Each department should adopt a straightforward “red flag” document for vetting new software before integrating it into the curriculum.
- Institute a review cycle: Recognize that the transition toward autonomous agents and multimedia requires policies that can adapt as the technology evolves.
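The first step in that checklist, a one-page data rule, could even be enforced mechanically: screen outgoing prompts for the sensitive categories before they ever reach a public platform. The sketch below assumes a keyword-based filter; the category names and patterns are hypothetical placeholders, not part of any official rule.

```python
# Hypothetical enforcement of a "one-page data rule": flag prompts that
# appear to contain sensitive categories (grades, health data,
# confidential research) before they are sent to a public AI platform.
import re

FORBIDDEN_PATTERNS = {
    "grades": re.compile(r"\b(grade|GPA|transcript)\b", re.IGNORECASE),
    "health data": re.compile(r"\b(diagnosis|medical record|disability)\b", re.IGNORECASE),
    "confidential research": re.compile(r"\b(unpublished|embargoed|confidential)\b", re.IGNORECASE),
}

def blocked_categories(prompt: str) -> list[str]:
    """Return the sensitive-data categories a prompt appears to contain;
    a non-empty result means the prompt should be blocked or rewritten."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(prompt)]
```

A keyword filter is crude, but it makes the rule operational on day one, which is exactly the spirit of the checklist: simple, immediately enforceable safeguards rather than a wait for perfect tooling.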
Traditionally viewed as a hindrance to innovation, regulation in the Ukrainian context has taken on the role of a necessary safeguard. By implementing a tiered, soft law strategy, Ukraine has constructed a framework that enables academic freedom while ensuring that AI applications are systematically managed, forming a cohesive ecosystem grounded in rules, roles, and ongoing evaluation.
The insight for the global education sector is straightforward: Anticipate and plan rather than wait for formal legislation. Avoid outsourcing pedagogical decisions to technology providers. Instead, take charge of designing and governing your own educational ecosystem. Ukraine’s approach illustrates that even amid significant challenges, it’s feasible to lead with both innovation and responsibility.