AI Action Plan for Justice

A roadmap for safe and effective AI adoption in justice

Overview

The AI Action Plan for Justice sets the strategic direction for responsible AI use across the Ministry of Justice over the next three years. It focuses on:

  • Strengthening AI foundations
  • Embedding AI across services through a Scan, Pilot, Scale model
  • Investing in the people and partnerships that make this possible

Quick Links

Access the full plan and related resources

Read the full plan

Read the complete AI Action Plan for Justice (HTML)

Executive Summary

Read a concise overview of the key points and objectives

Roadmap

Explore our implementation timeline and key milestones

Our Roadmap for AI Delivery

A 3-year journey guided by our "Scan, Pilot, Scale" approach.

2025

Year 1

Establish foundations and deliver early wins

  • 🟢 Roll out secure AI productivity tools across the department
  • 🟢 Pilot domain-specific AI (e.g. chat, search, transcription)
  • 🟢 Build AI capability and governance structures

Laying the groundwork: embedding safe, scalable AI across core services

2026

Year 2

Scale what works and deepen transformation

  • 🟡 Expand successful pilots across agencies
  • 🟡 Integrate AI into frontline operations and case handling
  • 🟡 Reinvest time saved into better public and staff experiences

From promising pilots to embedded solutions that improve delivery

2027

Year 3

System-wide AI integration at scale

  • 🔵 Deliver scaled, interoperable AI solutions
  • 🔵 Make AI part of how we work every day, from decisions to operations
  • 🔵 Enable smarter, joined-up use of data across the system

Enabling fairer, faster, and more personalised justice

Guiding Principles

Our approach to AI in justice is guided by these core principles:

Put safety and fairness first

AI in justice must work within the law, protect individual rights, and maintain public trust. This requires rigorous testing, clear accountability, and careful oversight, especially where decisions affect liberty or safety.

Protect independence

AI should support, not substitute, human judgment. We will preserve the independence of judges, prosecutors, and oversight bodies, ensuring AI works within the law and reinforces public confidence.

Start with the people who use the system

We will design AI tools around the needs of users, such as victims, offenders, staff, judges, and citizens. That means solving real problems, co-developing solutions with users, and localising services to reflect the diverse realities of justice.

Build or buy once, use many times

Wherever possible, we will build or buy common solutions that can be reused across the system, reducing costs and duplicated effort.