AI for Public Defenders

Getting Started with AI 

Each step below lists what to nail down, followed by a pro tip.

1. Security Review 

  • Confirm the vendor hosts data in a CJIS-compliant environment.
  • Execute an NDA and confidentiality addendum.
  • Decide what never goes into Gen-AI prompts (e.g., client identifiers).
Pro tip: Loop in IT and ethics counsel early; run a tabletop breach scenario.
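One way to operationalize the "never goes into Gen-AI prompts" rule is an automated scrub that runs before any text reaches an AI tool. The sketch below is purely illustrative: the case-number pattern, SSN pattern, and placeholder tags are assumptions, and a simple regex pass is not a substitute for vetted de-identification tooling reviewed by IT and ethics counsel.

```python
import re

# Hypothetical identifier patterns; adapt to your jurisdiction's formats.
CASE_NUMBER = re.compile(r"\bCR-\d{4}-\d{4}\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_prompt(text: str, client_names: list[str]) -> str:
    """Redact known identifiers before text is sent to a Gen-AI prompt."""
    text = CASE_NUMBER.sub("[CASE NO.]", text)
    text = SSN.sub("[SSN]", text)
    for name in client_names:
        # Replace each known client name, ignoring capitalization.
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

print(scrub_prompt(
    "Client Jane Doe (SSN 123-45-6789), case CR-2024-0117, seeks suppression.",
    ["Jane Doe"],
))
# → Client [CLIENT] (SSN [SSN]), case [CASE NO.], seeks suppression.
```

A deny-list like this also gives the tabletop breach scenario something concrete to test: feed it sample filings and check what leaks through.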

2. Pilot Design

  • Choose one high‑volume use case (e.g., body‑cam transcription).
  • Limit to 5–10 attorneys and 50–100 cases.
  • Set a start/stop date (e.g., 90 days).
Pro tip: Pre-collect an "hours spent" baseline so you have a before/after comparison.

3. Training Plan

  • 2‑hour kickoff workshop: features, dos & don’ts, verification steps.
  • “Cheat‑sheet” on prompt templates & red‑flag errors.
Pro tip: Pair each user with a "super-user" for just-in-time help.

4. Success Metrics 

  • Efficiency: avg. hours saved per case.
  • Quality: % transcripts with ≤1 material error; motion‑grant rate.
  • User adoption: log‑ins per week, survey of attorney satisfaction.
Pro tip: Review metrics at weeks 4 and 8; decide by week 12 whether to scale.
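The before/after comparison behind the efficiency and adoption metrics is simple arithmetic. The sketch below is a minimal illustration, assuming per-case hour logs were collected as in step 2; all figures and field names are hypothetical.

```python
from statistics import mean

# Hypothetical pilot data: hours logged per case, before and during the pilot.
baseline_hours = [6.5, 8.0, 7.2, 9.1, 6.8]   # pre-pilot "hours spent" baseline
pilot_hours = [4.0, 5.5, 4.8, 6.2, 4.1]      # same task mix with the AI tool

# Efficiency metric: average hours saved per case.
avg_saved = mean(baseline_hours) - mean(pilot_hours)
print(f"Avg. hours saved per case: {avg_saved:.1f}")

# Adoption metric: log-ins per week for each attorney in the pilot group.
logins = {"attorney_a": 12, "attorney_b": 3, "attorney_c": 9}
weeks = 4
for who, n in logins.items():
    print(f"{who}: {n / weeks:.1f} log-ins/week")
```

Even a spreadsheet version of this calculation works; the point is that the baseline must exist before the pilot starts, or there is nothing to compare against at week 12.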

Limitations and Ethical Considerations of AI Tools in Criminal Defense

Each entry below lists, for one tool category: the specific tools discussed, key limitations, primary ethical concerns, and mitigation strategies/requirements.
Transcription (A/V Evidence)

  • Tools discussed: JusticeText, Reduct.Video, LegalServer AI
  • Key limitations: accuracy dependent on audio quality; potential for errors misrepresenting evidence [6]
  • Primary ethical concerns: competence (verifying accuracy); confidentiality (handling sensitive recordings); potential over-reliance due to time pressure [6]
  • Mitigation strategies: mandatory human review and correction; user training on limitations; vendor vetting of security/accuracy claims; human transcription for critical/court use [23]

Legal Research & Drafting (GenAI)

  • Tools discussed: Casetext CoCounsel, Westlaw AI, Lexis+ AI
  • Key limitations: prone to "hallucinations" (false information and citations); variable reliability; requires skilled prompting [68]
  • Primary ethical concerns: competence (verification mandatory); candor to the tribunal (avoiding false submissions); confidentiality (inputting client data); fees (billing for efficiency) [22]
  • Mitigation strategies: rigorous verification against primary sources; training on prompt engineering and limitations; clear policies on data input; client consent if needed; fair billing practices [22]

Case Management (AI Integration)

  • Tools discussed: LegalServer AI Suite; LACPD's AWS/CMS integration
  • Key limitations: integration complexity; accuracy of data extraction/classification; potential workflow disruption if flawed [45]
  • Primary ethical concerns: competence (understanding integrated features); confidentiality (data within the CMS); supervision (staff use of AI features) [22]
  • Mitigation strategies: thorough testing and validation of AI features; user training; strong data governance within the CMS; human verification steps in the workflow [23]

Sentencing Analytics

  • Tools discussed: SentencingStats
  • Key limitations: reliance on historical data patterns; contested predictive accuracy; potential for misuse or overstatement [53]
  • Primary ethical concerns: bias (inherited from historical data); fairness (predicting future behavior); transparency (algorithmic basis of predictions); competence (interpreting statistics) [6]
  • Mitigation strategies: critical analysis of data sources and methods; awareness of potential biases; use as advocacy support, not definitive prediction; transparency in methodology where possible [53]

E-Discovery Platform (Broad AI)

  • Tools discussed: Relativity (RelativityOne, aiR, Analytics)
  • Key limitations: complexity; cost/access barrier; requires skilled users and administrators; AI-feature reliability (e.g., aiR accuracy and cost) [83]
  • Primary ethical concerns: bias (in algorithms and data); confidentiality (large-scale ESI); competence (using advanced features); access disparity (cost barrier for public defenders) [6]
  • Mitigation strategies: robust training; skilled personnel; vendor vetting (security, AI validation); clear protocols for AI-feature use; advocacy for resources for PD access [23]

Conclusion

Case studies from Los Angeles, Miami-Dade, Santa Cruz, Kentucky, and Colorado demonstrate the tangible benefits of AI in evidence processing, legal research, and case management. As funding and infrastructure improve, AI is expected to play an even greater role in ensuring effective legal representation for all defendants.


For further details, visit: