Keido Labs

Your AI has never seen a psychologist.

The AI Psychology Safety Audit is a clinical assessment of how your AI conversations affect humans psychologically. Conducted by a clinical psychologist. Powered by monitoring infrastructure that evaluates every message.

The Problem You Feel But Can't Measure

Your AI talks to people. Sometimes vulnerable people. Sometimes people in crisis.

You've thought about safety. Maybe you've done red-teaming. Run bias tests. Written careful system prompts. Added content filters.

But you still can't answer the question that matters most:

Are your AI conversations psychologically safe?

Not "does the AI avoid bad words." Not "does it follow the script."

Are the conversations building trust or eroding it? Acknowledging distress or dismissing it? Respecting boundaries or crossing them? Helping people or harming them — in ways that won't show up in your analytics?

You can't measure this with NLP. You need clinical psychology.

What You Get

A clinical psychologist evaluates your AI conversations using the same frameworks used to supervise human therapists.

01

Clinical Review

Each audit is led by Dr. Michael Keeman and conducted under his clinical supervision. Attachment theory. Crisis intervention protocols. Boundary assessment. Psychological safety models.

This isn't an algorithm scoring your transcripts. It's a trained clinician assessing psychological dynamics.

02

Monitoring Period (2-4 weeks)

Our monitoring platform, EmpathyC, runs on your live conversations. Every message is assessed against clinical rubrics — in real time. Empathetic response quality. Crisis detection. Boundary violations. Harmful advice. Psychological safety scoring.

This generates the richest psychological safety dataset your product has ever had.
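For illustration only: the rubric dimensions above could be pictured as a per-message record. Everything in this sketch (field names, score scale, the `MessageAssessment` class itself) is an assumption for the sake of concreteness, not the actual EmpathyC schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one scored message. The rubric dimensions
# mirror the list above; the names and 0.0-1.0 scale are assumptions.
@dataclass
class MessageAssessment:
    message_id: str
    empathy_quality: float      # empathetic response quality, 0.0-1.0
    crisis_flag: bool           # crisis language detected in this message?
    boundary_violation: bool    # did the AI cross a clinical boundary?
    harmful_advice: bool        # advice a supervising clinician would flag
    safety_score: float         # composite psychological safety score
    notes: list[str] = field(default_factory=list)  # clinician annotations
```

A dataset of records like this, one per message, is what the audit report is written against.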

03

Clinical Safety Assessment Report

A Clinical Safety Assessment Report — a document that tells you, specifically, what is happening in your AI's conversations.

Not a dashboard. Not a compliance checklist. A clinical report.

The report covers:

  • Which conversation patterns pose the highest psychological risk in your specific product
  • Where your AI fails your own rubrics — and why
  • Crisis detection accuracy across real conversations from your monitoring period
  • Boundary and attachment patterns specific to your user base
  • Prioritised recommendations, written for your engineering and product teams
  • Benchmarks against what we observe across the field

You leave with something you can act on. And something you can show your board.

04

Ongoing Monitoring

After the audit, EmpathyC continues running — giving you continuous visibility into psychological safety across every conversation, every day.

Who This Is For

The audit is for companies that build AI products that talk to humans — and want to get it right.

AI therapy and mental health platforms

Your users come to you at their most vulnerable. You started this company because you wanted to help them. The audit tells you whether your AI actually is.

AI companions and relationship products

Your users form deep emotional bonds with your AI. The psychological stakes are enormous. You need to understand what's happening in those conversations — clinically, not algorithmically.

AI coaching platforms

Career coaching. Life coaching. Fitness coaching. Your users are in transition, making decisions, often stressed. Your AI's advice has real psychological weight.

AI customer support

Your AI handles thousands of customers at scale, some of them distressed. One bad conversation goes viral. Thousands of quiet lost-trust moments erode your brand.

AI education products

Your AI talks to students — often young people. The psychological impact of those interactions matters more than your CSAT score.

We take on a limited number of audits each quarter to maintain the clinical depth each engagement requires.

How It Works

1

We talk. You tell us about your product, your users, and your concerns. We scope the audit. Engagements start at $5,000. Most complete within four weeks.

2

We connect. Your team adds a single API call to your product — about 10 lines of code. That's the entire integration. Done in an afternoon, nothing to maintain.

3

We monitor. For 2-4 weeks, every conversation is assessed against clinical rubrics — under Dr. Keeman's ongoing clinical supervision.

4

We report. You receive a Clinical Safety Assessment Report with specific findings, risks, and recommendations.

5

We stay. EmpathyC continues monitoring. You have ongoing visibility into psychological safety. We stay available for questions.

Why a Clinical Psychologist, Not an Algorithm

AI safety tools test for bias, toxicity, and hallucination. Important — but they miss the psychology.

They can't tell you whether your AI's response to a grieving user made them feel heard or dismissed. Whether a boundary was crossed in a way that erodes trust over time. Whether your crisis detection actually works when someone is in genuine distress.

A clinical psychologist can. Because that's what clinical psychologists are trained to assess: they've done it in conversations between humans for over a century, and now in conversations between humans and AI.

Dr. Keeman has spent 15 years doing exactly this with real people. The AI Psychology Safety Audit brings that clinical lens to your product.

Your AI talks to humans.
A psychologist should be paying attention.

Let's talk about what an audit would look like for your product.

No pitch deck. No demo. A conversation about your product and your users.