AI that serves lawyers.
Not the other way around.
FRITH builds AI features with a clear set of ethical principles. This page is our public commitment to responsible AI in legal practice — what we do, what we don't do, and how we're held accountable.
Last reviewed: March 2025
Our six AI principles
These principles govern every AI feature we build and every AI model we integrate.
Transparency
FRITH tells you which AI model is being used for every query. We distinguish AI-generated content from human-authored content in the interface. We do not obscure the nature of AI outputs.
- Model name and provider displayed on every AI interaction
- Confidence indicators shown where available
- AI-generated document sections are visually differentiated
- Audit logs record every AI query with timestamps and model used
Human in the loop
FRITH's AI augments lawyer judgment — it never replaces it. Every AI output requires human review before it reaches a client or is filed. We deliberately design friction into consequential AI actions.
- No AI output is sent to clients automatically without attorney review
- AI-generated documents require explicit "approve and send" actions
- AI is presented as a drafting and research assistant, not a final authority
- Users can always override, edit, or reject AI suggestions
Your data is never used to train AI
FRITH does not train AI models on your firm's matters, documents, client communications, or any data entered into the platform. Your data exists only to serve you — not to improve AI for third parties.
- Zero customer data shared with any AI provider for training
- BYOK model ensures queries go directly to your chosen provider under your key
- Contracts with all AI sub-processors prohibit use of your data for training
- Ollama self-hosted option means AI never leaves your infrastructure
Privacy-preserving architecture
FRITH's BYOK (Bring Your Own Key) model means your AI queries are sent from your browser to your chosen AI provider using your own API key — FRITH never sees the content of AI queries in transit.
- BYOK supported for OpenAI, Anthropic, Groq, Gemini, and Ollama
- API keys stored encrypted with AES-256; never logged in plain text
- AI query content is not logged by FRITH (only metadata: timestamp, model, token count)
- Enterprise customers can run fully self-hosted AI with Ollama
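The self-hosted option can be sketched as a local-only call. This is an illustrative sketch, not FRITH's actual implementation: the guard helper and model name are assumptions, and 11434 is simply Ollama's default port.

```typescript
// Illustrative guard (assumption, not FRITH code): accept only endpoints
// on the local machine, so prompts never leave the firm's infrastructure.
function isLocalEndpoint(url: string): boolean {
  const host = new URL(url).hostname;
  return host === "localhost" || host === "127.0.0.1" || host === "[::1]";
}

// Query a self-hosted Ollama instance (default port 11434); "llama3" is
// an example model name. `stream: false` returns a single JSON response.
async function queryLocalOllama(prompt: string): Promise<string> {
  const endpoint = "http://localhost:11434/api/generate";
  if (!isLocalEndpoint(endpoint)) {
    throw new Error("Refusing to send prompt to a non-local endpoint");
  }
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  return (await response.json()).response;
}
```

Because the endpoint is on the firm's own network, neither the prompt nor the completion ever transits a third party.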
Honesty about limitations
AI models make mistakes. Legal AI makes consequential mistakes. FRITH is designed to surface uncertainty and discourage over-reliance on AI outputs in high-stakes legal contexts.
- AI research outputs include source citations so lawyers can verify
- Hallucination warnings displayed for complex legal analysis tasks
- AI templates are starting points — not finished legal products
- We do not claim AI outputs constitute legal advice
Accountability
FRITH maintains an internal AI ethics review process for all new AI features before release. We designate a responsible person for AI governance and take reports of AI harms seriously.
- Internal AI ethics review before shipping any new AI capability
- Designated AI governance contact: ai-safety@frithai.com
- Incident process for reported AI harms or misuse
- Annual review of AI principles against evolving legal and regulatory standards
Lines we will never cross
These are explicit prohibitions, not aspirations — hard constraints built into how we design and deploy AI features.
Why BYOK is an ethical choice
Bring Your Own Key (BYOK) puts AI governance back in your hands. Instead of routing your queries through FRITH's API layer, BYOK means your firm's AI queries go directly to the model provider using your credentials — FRITH never intercepts them.
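A minimal sketch of that flow, assuming a generic chat-style provider API — the endpoint, payload shape, and audit-record fields here are illustrative assumptions, not FRITH's actual implementation:

```typescript
// Metadata-only audit record: timestamp, model, and token count — never
// the prompt or completion content (field names are illustrative).
type QueryMetadata = {
  timestamp: string;
  model: string;
  tokenCount: number;
};

function buildAuditEntry(model: string, totalTokens: number): QueryMetadata {
  return {
    timestamp: new Date().toISOString(),
    model,
    tokenCount: totalTokens,
  };
}

// The query goes straight from the browser to the provider, authenticated
// with the firm's own API key. "api.provider.example" stands in for
// whichever provider the firm has chosen.
async function runByokQuery(apiKey: string, model: string, prompt: string) {
  const response = await fetch("https://api.provider.example/v1/chat", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  const result = await response.json();
  // Only usage metadata is retained; the query content is never logged.
  return { result, audit: buildAuditEntry(model, result.usage?.total_tokens ?? 0) };
}
```

The design point is that the intermediary never sees the request body: the browser holds the key, talks to the provider directly, and keeps only usage metadata.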
Questions about our AI practices?
Contact our AI governance team at ai-safety@frithai.com or read our Trust Centre and Security page for more detail.