
AI Governance in FE: Why We Need More Colleges to Publish AI Policies

AI adoption across the FE sector is starting to accelerate, but institutional awareness and readiness are lagging behind the pace of technological change.

Through my engagement with FE institutions, it is clear that a significant number of colleges remain at an early stage of integrating AI into their operations. Recent analysis by MKAI suggests that between 70% and 85% of educators have yet to meaningfully incorporate AI into their workflows on a regular basis. This indicates a notable gap between technological possibilities and practical readiness, highlighting potential vulnerabilities across ethical, operational, and security dimensions.

The speed at which AI advancements—Claude 3.7 for coding, OpenAI Deep Research, Alexa+, to name just a few—are entering the mainstream underscores the need for colleges to adopt clear governance frameworks. Without well-defined policies to guide AI usage, institutions risk encountering avoidable challenges, ranging from issues around academic integrity to inadvertent security breaches or inconsistent operational practices.

FE colleges face a rapidly changing AI landscape—those without a high-quality AI policy are unprepared for the ethical, security, and operational challenges ahead.

Emerging Risks: Ethical, Security, and Operational Threats

The risks of inaction are not abstract—they are specific and pressing.

FE colleges without an AI policy face threats on multiple fronts:

  • Ethical and Legal Risks: Without guidelines, the use of AI can easily slip into ethically grey areas. AI systems may unintentionally perpetuate bias or unfairness in everything from admissions to grading. A teacher experimenting with an AI tool could, for example, unknowingly violate student privacy by feeding sensitive data into a third-party service. Questions of intellectual property also loom large: Who owns AI-generated content, and how can we prevent "AI malpractice"? An effective policy sets clear rules to promote ethical, transparent, and fair use of AI, addressing bias, data privacy, copyright, and academic integrity. Crucially, it should align with data protection law (UK GDPR) so that the college does not run afoul of legal requirements when using AI.
  • Security Risks: AI systems, especially generative AI, introduce new security considerations. For instance, staff or students might input confidential information into AI chatbots, unwittingly causing a data leak. There are also concerns about the robustness of AI tools—malicious actors could try to manipulate AI outputs or use AI to generate convincing phishing messages targeting the college. A policy helps establish approved AI platforms and security protocols, ensuring that any AI tool deployed has been vetted for safety and that users are trained in secure practices. It also prepares the college for scenarios like AI-generated disinformation or deepfakes that could impact the institution’s reputation and student safety.
  • Operational and Educational Risks: Perhaps the most immediate threat of not having a policy is operational chaos. In the absence of guidance, different departments or teachers may adopt AI in inconsistent ways. One teacher might use ChatGPT heavily to support students, while another might not feel comfortable with its output. Such disparity can lead to an uneven learning experience, where some students are advantaged and others left behind depending on their teacher's approach. Consistency is key: a policy ensures everyone is on the same page about how AI is used in teaching, assessment, and daily operations. Moreover, without a policy, staff receive mixed signals about AI use—some may avoid AI due to uncertainty or fear, while others plunge in without support, leading to misuse or overreliance. This inconsistency can erode academic standards and student trust. Operational risk also extends to system outages and errors: if an admissions department started using an AI system without oversight and it failed, how would the college respond? A comprehensive AI policy includes contingency plans and clarity on human oversight, so AI remains a tool supporting educators and administrators, not erroneously replacing their judgment.

Lessons from Early Adopters: Policies in Action

Several forward-thinking colleges have already begun to navigate this terrain, offering models of what good AI policy looks like—and highlighting what gaps remain in the sector.

To support leaders in navigating the next steps in AI policy, MKAI has created an executive report – “Artificial Intelligence Policies in Further Education: A Strategic Briefing for College Leaders” – which compiles insights from these early adopters.

We found that colleges with a strong policy make it explicit that ultimate accountability lies with humans, and that AI’s role is supportive. They plan to update their guidelines as new AI capabilities (or risks) emerge, treating the policy as a living document.

Access the new MKAI Report here.

Despite these exemplars, gaps remain across the FE sector. The comparative analysis in the MKAI briefing found that while some colleges have robust, comprehensive AI strategies, others have taken only partial steps. For example, one institution’s policy might cover classroom usage and academic honesty, but say little about data protection or the use of AI in administrative departments. Another might outline staff responsibilities but not student guidance, or vice versa. And most FE colleges still have no published AI policy at all.

Many are in the process of drafting one, often waiting for more national guidance or struggling to find the time and expertise to do it. This lag is understandable—AI in education is a fast-moving target and there is no long-standing template to follow. However, it is also perilous. Every term that passes without a policy is time in which AI use (or misuse) can proliferate unchecked. College leaders should not assume they can afford to wait for a perfect, complete answer from elsewhere. The better approach is to start with a basic policy now and refine it over time. Even an initial framework addressing the most urgent concerns is better than a vacuum.

Proactive Leadership: The Time for Action is Now

AI is often described as the next industrial revolution for education. With such high stakes, FE colleges cannot rely on ad-hoc reactions or informal understandings to get them through. What’s needed is bold, proactive leadership—college executives who anticipate the changes and steer their institutions safely through them. An AI policy is a manifestation of that leadership: it’s a strategic move that declares, “We acknowledge this seismic change, and here is how we will harness it for good while protecting our community.”

With a thoughtful AI policy in place, a college can confidently adopt AI tools to enhance teaching and streamline operations, knowing that there are guardrails to prevent things from going off track. It builds institutional resilience: when the next AI breakthrough hits (and it could be just weeks away), the college will have a framework to evaluate and integrate it, rather than scrambling in panic. Conversely, without a policy, each new AI development will bring confusion and risk. We must lead with foresight, ensuring our Further Education sector is prepared.

By Richard Foster-Fletcher, Executive Chair of MKAI
