Trainings for Therapists Navigating AI + Documentation Ethics: Notes, Privacy, and What to Do Next
AI has a special talent for making therapists feel two things at the exact same time:
Curious.
Slightly nauseous.
On one hand, it’s fascinating. There are tools that can help with admin, organization, maybe even writing drafts of things you never want to write again. On the other hand, we’re clinicians. We hold confidential information. And suddenly we’re being told, “Everyone’s using AI! You should too!” while also hearing, “Be careful, you could accidentally compromise protected information.”
So if you’ve been feeling uncertain, skeptical, overwhelmed, or like you need a nap after reading one more AI hot take, that makes sense.
This topic deserves discernment, ethics, and a clear plan.
And that’s exactly what therapist trainings are for: support and structure, not panic.
If you want the big-picture guide to choosing trainings based on your stage, time, and nervous system, start here with our therapist training guide.
Now let’s talk about what to learn when your brain is asking, “Okay, but what am I actually allowed to do?” and your nervous system is asking, “How do I stay safe?”
First: AI anxiety is often documentation anxiety in a trench coat
A lot of therapists say they’re anxious about AI.
But when you listen closely, what they’re really anxious about is documentation.
Because documentation touches everything:
clinical judgment
legal risk
client trust
reimbursement (if you’re in that world)
and time (so much time)
AI enters the conversation because people are hoping it can reduce the documentation load or because platforms are introducing AI note tools and requiring their use.
So before we even talk about tools, let’s name the real needs underneath this topic: you want to protect clients and your license. Your documentation needs to be ethical and defensible.
AI isn’t one thing (and that’s part of the confusion)
One reason this topic is so messy is that “AI” is a bucket word.
People use it to mean everything from:
a spellcheck tool
to an EHR feature
to an AI “scribe”
to a chatbot acting as a “therapist”
to a tool that stores your data who-knows-where
And those are not the same risk level.
So the first thing training can help with is simply clarity. A framework for thinking, “What type of tool is this? What information does it touch? Where does that information go? What are my obligations? What policies do I need?”
That’s why trainings like AI and Therapy Notes Ethics and Private Practice Under Pressure: AI and Ethics are so valuable. They give you a lens, not just a list of rules.
If you want to start with those free trainings for therapists, you can find them here.
The real ethical question isn’t “Should I use AI?”
It’s “How will I decide?” Because you can make ethical decisions in a lot of directions:
You may decide you don’t want AI anywhere near client content.
You may decide you’ll use AI for administrative tasks that don’t involve protected information.
You may decide to use certain tools only if they meet specific privacy standards and you have informed consent language and policies in place.
You may decide to use AI only as a thinking partner for non-client-specific content (like drafting a general cancellation policy template).
There isn’t one universal answer that fits every practice, every state, every setting, every platform, or every risk tolerance.
But there is a universal need for a process: a way to evaluate tools and make decisions that you can stand behind.
Trainings are what give you that process. And our ethics require us to understand the tools we use, which is exactly why training matters.
A simple way to evaluate AI tools without spiraling
What helps is a calm set of questions you can run any tool through. For example:
What information does this tool touch?
Does it include client-identifying data?
Where is the data stored?
Does the company use inputs to train models?
Do I have a contract/BAA or equivalent protections if needed?
What would I tell a client about this?
What would I do if I had to defend this decision to a board?
Even asking these questions reduces anxiety, because it moves you from vague fear to concrete discernment.
Why this matters for clinical care (not just legal compliance)
Sometimes ethics conversations feel abstract. Like a checklist.
But in therapy, ethics are relational.
Clients trust you with their interior life. So when we talk about documentation and AI, we’re really talking about trust, and trust isn’t something you want to handle casually.
That’s why so many therapists feel activated by this topic. It’s not because they’re “behind.” It’s because they care and their clinical values are intact. So a nervous-system-friendly approach to this topic is not “ignore it.” It’s “get support so you can make grounded decisions.”
What to train in (so you leave with clarity and next steps)
If you’re trying to navigate AI and documentation ethics, the training path that tends to help most is:
1) Ethics + notes: start with the core landscape
Before you touch any tools, get oriented. Learn the ethical considerations and common pitfalls so you’re not relying on random internet opinions.
Trainings like AI and Therapy Notes Ethics are designed for this exact purpose.
2) Broader pressure: understand the context you’re practicing inside
One reason this AI topic feels so intense is that it’s happening alongside other pressures: platforms, documentation demands, changes in the profession, shifting client expectations, and a general sense of “wait, who is shaping the future of therapy?”
Trainings like Private Practice Under Pressure: AI and Ethics can help you locate your stance and make decisions that align with your values, without feeling like you have to keep up with every trend.
You can find that therapist training here.
3) Sustainability: make sure your documentation practices support your nervous system
If your documentation practices require you to work late, or you’re constantly anxious about whether your notes are “right,” that affects your nervous system and your clinical presence.
So sometimes the next training you need isn’t “AI.” It’s sustainability training. It’s fee-setting. It’s outcome structure. It’s anything that reduces the pressure on your capacity so you’re not trying to solve exhaustion with tech.
If you want to find trainings that support sustainability and outcomes (which often reduces documentation stress), browse here.
“But I just want someone to tell me what’s allowed.”
We know. We get that. Part of what makes this topic hard is that “allowed” can depend on:
your license type
your jurisdiction
your setting (private practice vs agency vs platform)
your EHR policies
your informed consent process
the specific tool’s privacy practices
So instead of one universal rule, what you want is a process you can repeat. That’s what good training gives you: a way to think. You leave not just with information, but with clarity about what questions to ask, what safeguards to put in place, and what your stance is.
The point of therapist trainings here is support, not fear
AI conversations can get dramatic fast, but with some training you can feel empowered to know which questions to ask as you make decisions for your private practice.