AI Use Policy

Effective: 30 March 2026
Compliant with BSB Core Duties and Bar Council Guidance on Generative AI (November 2025)

1. Purpose and Scope

This policy governs my use of artificial intelligence (AI) tools in my practice as a sole-trading barrister, including large language models (LLMs) such as ChatGPT, Google Gemini, Perplexity, Harvey, and Microsoft Copilot. It ensures compliance with:

  • BSB Core Duties (particularly CD1: duty to the court; CD2: best interests of each client; CD3: honesty and integrity; CD6: confidentiality; CD7: competent standard of work)

  • Bar Council's Updated Guidance on Generative AI (November 2025)

  • UK GDPR and the Data Protection Act 2018

  • Legal Professional Privilege (LPP) obligations

AI use is permitted only as an augmenting tool; ultimate responsibility for all legal work remains with me.

2. Permitted Uses of AI

AI may be used responsibly for:

2.1 Approved Tasks

  • Drafting initial outlines or first drafts of non-substantive documents (e.g., skeleton argument structure, correspondence templates)

  • Summarising well-defined areas of law (where I have expertise to evaluate accuracy)

  • Brainstorming arguments or identifying potential issues

  • Proofreading and editing for clarity/grammar

  • Administrative tasks (calendar management, time tracking, document organisation)

2.2 Conditions for Use

  • I must understand the capabilities and limitations of each AI tool before using it

  • AI outputs must be reviewed and verified for accuracy, and interpreted in the context of the matter at hand

  • AI must never replace human judgment or professional expertise

3. Prohibited Uses of AI

3.1 Absolute Prohibitions

  • Submitting AI-generated content to the court without independent verification

  • Citing AI-generated case law, statutes, or legal authorities without confirming:

    • The case exists

    • The citation is accurate

    • The legal proposition is correctly stated

    • The case remains good law

  • Inputting confidential client information, privileged material, or personal data into public AI tools without adequate safeguards

  • Using AI to draft core legal arguments without substantive independent analysis

  • Relying solely on AI for legal research without checking authoritative sources (Westlaw, LexisNexis, Inns of Court libraries)

4. Verification Obligations

For all AI-assisted work, I will:

  • Legal citations: confirm the case exists, the citation is accurate, the proposition is correctly stated, and the case remains good law

  • Legal analysis: cross-check against authoritative databases (Westlaw, LexisNexis, BAILII)

  • Factual assertions: verify against primary sources (evidence, documents, statutory text)

  • AI confidence: never rely on AI's apparent confidence; verify independently regardless

5. Data Protection & Confidentiality

5.1 GDPR Compliance

  • Ensure lawful basis for processing any personal data input to AI systems

  • Verify AI providers offer adequate data protection commitments

  • Review AI tool terms to confirm they permit processing of client matter data

  • Document decisions about AI use where personal data is involved

5.2 Confidentiality Measures (Core Duty 6)

  • Never input client names, case details, privileged communications, or sensitive personal data into public LLMs

  • Use enterprise/private AI instances with data protection agreements where client data must be processed

  • Anonymise data before input where possible (remove identifying information)

  • Maintain legal professional privilege at all times

6. AI Tool Assessment & Registration

6.1 Approved Tools Register

I maintain a register of AI tools that have been assessed against the criteria in section 6.2.

6.2 Assessment Criteria Before Using New Tools

  • Provider's data protection commitments (GDPR compliance)

  • Data storage location (UK/EEA preferred)

  • Whether outputs are used for model training

  • Security vulnerabilities

  • Reputation and reliability in legal context

7. Competence & Training (Core Duty 7)

7.1 Knowledge Requirements

I will maintain understanding of:

  • AI tool capabilities and limitations

  • Verification best practices

  • Data protection implications

  • Ethical obligations in AI-assisted practice

7.2 CPD Commitment

  • Complete annual training on AI tools and ethical use

  • Stay current with regulatory developments (Bar Council/BSB guidance updates)

  • Understand that competent use of AI is now a professional competence expectation

8. Disclosure to Clients and Court

8.1 Client Disclosure

  • Inform clients when AI will be used in their matter (where material to engagement)

  • Explain limitations of AI and that human oversight is maintained

  • Address any client concerns about AI use

8.2 Court Disclosure

  • Current Bar Council guidance imposes no mandatory requirement to disclose AI use to the court, but such requirements may develop

  • If directly asked by the court whether AI was used, I will answer truthfully (Core Duty 3: honesty and integrity)

  • Follow any court-specific standing orders on AI disclosure

9. Record-Keeping & Audit Trail

I will maintain records of:

  • Which AI tools were used for each matter

  • What AI-generated content was produced

  • Verification steps undertaken (critical for demonstrating competence)

  • Client consent where AI use is material

  • Data protection assessments for AI tools used

Records will be retained for a minimum of six years, in line with professional indemnity requirements.

10. Breach Response Protocol

If I suspect AI has produced inaccurate, hallucinated, or fabricated content:

  1. Immediately halt use of that output

  2. Conduct full verification using authoritative sources

  3. If AI-generated content was submitted to court/client:

    • Notify instructing solicitor immediately

    • File correction/corrigendum to court if necessary

    • Document the error and remediation steps

  4. Report serious breaches to the BSB if required (Core Duty 9: open and cooperative with regulators)

  5. Review and update verification procedures to prevent recurrence

11. Policy Review

This policy will be:

  • Reviewed annually (minimum)

  • Updated immediately following BSB/Bar Council guidance changes

  • Revised after any AI-related error or regulatory development

Next review date: 30 March 2027

Contact