14 August 2025

What I learned from co-authoring the AI Action Plan for Justice

Franzi Hasford
AI Fellow, Justice AI Unit

The Ministry of Justice just launched its AI Action Plan for Justice, a three-year plan to embed AI safely and responsibly across the justice system. Read it here; this post is not a recap.

I’ve been reflecting on what I’ve learnt from co-authoring it with our secure, general-purpose AI tools. As I return from my holiday in the Alps, a German word comes to mind: “Gratwanderung”. Literally, it means walking along a steep mountain ridge; figuratively, it describes a careful balancing act that requires judgment. It’s an apt description of my experience:

1. Function over form

Our secure deployment of ChatGPT Enterprise has been an invaluable companion, drafting, summarising, reviewing and critiquing the many versions of our document. But even its o3 advanced‑reasoning model still lacks contextual awareness, which means there’s always a risk of over‑relying on AI‑generated content. Recent research from MIT Media Lab (Kosmyna et al., 2025) found that participants writing SAT‑style essays with LLM assistants showed significantly lower brain engagement and produced more formulaic essays with less originality.

In our case, AI produced a polished first iteration of the plan, but one that tried to boil the ocean, hadn’t accounted for tough choices and hadn’t been tested against ministerial ambition. So critical thinking still matters. Let’s not become lazy by taking AI at its word, but use it to kick-start and enhance our work. Prompts that ask AI to critique our work can encourage deeper reflection and broader inquiry, leading to a better product.

2. Speed vs. depth

With each passing day without a clear and transparent AI Action Plan, trust erodes, both within the organisation and beyond. Some may perceive us as leaving benefits on the table; others may question our intentions or ethical use. Speed was of the essence, and our AI tools expedited the writing process. But an AI Action Plan is worthless without action being taken on the back of it.

We therefore invested significant time in collaboration and co‑design. Months of in‑depth stakeholder engagement should now have paved the way for timely implementation. More broadly, we should all challenge ourselves regularly on whether we invest the time gained from AI productivity tools in meaningful human interaction and tangible action.

3. AI exceptionalism or just another tech?

I have been asked why we need an AI Action Plan, a specialist unit and a steering committee — aren’t we just riding the hype cycle? I don’t think so. AI’s transformative power, rapid evolution and high‑stakes risks outstrip existing remits and merit focused investment, at least for now. Foundation models are genuinely general‑purpose: one model can solve problems across the department. We treat many AI solutions as products, which lets us scale them fast.

And the landscape changes daily — new model generations appear every month and announcements flood our chats. Traditional policy and budget cycles can’t keep up. A dedicated expert group is vital to sift signal from noise at pace. AI’s huge upside is also matched by systemic risks that require specialist expertise. Ethical, trustworthy use needs strategic head‑space without spawning parallel governance.

The Plan is just the start of our journey to become the leading department in safe and responsible AI adoption for the benefit of the public and our workforce. The balancing act will continue but we are well equipped to master it. As AI becomes integral to our work, let’s focus on what we do best: ethical judgment, purposeful decision‑making, empathy and care.

Want to learn more?

Meet the team behind our work and learn about our mission.