Technology and Mental Health: Why Clinicians Must Own the Tools They Use

Technology and mental health are increasingly intertwined. From electronic health records (EHRs) to AI-powered documentation and risk assessment tools, clinicians today face both opportunities and risks. The promise is clear: less time spent on paperwork, more time for clients, and even enhanced clinical insights. Increasingly, EHR companies are pairing their platforms with AI features aimed at serving therapists and clients. But behind the promises lie legal, ethical, and professional responsibilities that cannot be outsourced.

A recent article from Person Centered Tech (“De-Identified or Not? The Truth About HIPAA, AI, and Client Data”) highlights just how complicated and often misunderstood this space can be. Their message is sobering: what vendors market as “de-identified” or “anonymized” client data may not meet HIPAA’s strict standards. And when things go wrong, clinicians, not tech companies, carry the ultimate responsibility.

What the PCT Article Reveals About Technology and Mental Health

PCT outlines several truths clinicians must understand if they use technology and AI in mental health practice:

  • De-identification under HIPAA is strict. It requires either the Safe Harbor method (removing all 18 identifiers) or an Expert Determination (a statistical review proving re-identification risk is “very small”). Few AI or documentation companies meet these standards.

  • “Anonymized” isn’t a legal standard. Vendors may claim data is anonymized, but unless it meets HIPAA’s definition of de-identification, clinicians remain exposed.

  • Transcripts are particularly risky. Even if names are removed, client stories and timelines are often unique enough to re-identify individuals (see the sketch just after this list).

  • A BAA isn’t enough. A Business Associate Agreement can’t protect clinicians if the vendor misunderstands PHI or misuses data.

  • Clinicians must ask hard questions. From “What identifiers are removed?” to “Has an expert determination been completed?”, the due diligence is ours to perform.

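To make the gap concrete, here is a minimal, hypothetical Python sketch that flags just a few Safe Harbor identifier categories in a transcript snippet. The patterns and names are my own illustration, not any vendor’s product or an actual compliance tool: real Safe Harbor de-identification must address all 18 categories, typically with validated tooling rather than naive regular expressions.

```python
import re

# Hypothetical patterns covering only a few of Safe Harbor's 18 identifier
# categories (phone, SSN, email, dates, zip codes). Real de-identification
# pipelines must address all 18 and are validated, not regex-only.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "zip": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def flag_identifiers(text: str) -> dict:
    """Return any pattern matches found in a transcript snippet."""
    hits = {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

snippet = "Client moved to 90210 on 3/14/2023; callback number is 555-867-5309."
print(flag_identifiers(snippet))
# {'phone': ['555-867-5309'], 'date': ['3/14/2023'], 'zip': ['90210']}
```

And even after stripping every match above, the narrative itself (“moved to a famous zip code in early 2023”) may still be unique enough to re-identify a client, which is exactly the transcript risk PCT describes.
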
It can feel overwhelming to know which questions to ask and to keep up as each platform rapidly expands its technology and services. Yet here’s where technology and mental health intersect most powerfully: clinicians remain accountable for how these tools are used. An AI tool may promise “no data stored,” or disclaim that its treatment recommendations are not “medical advice.” But if you use that AI to guide your decisions, you, not the vendor, are responsible for the outcomes.

What the New Guidance Adds to Ethics Codes

Both the American Counseling Association (ACA) and the American Psychological Association (APA) have issued or reaffirmed ethical guidance on the use of AI and machine learning in professional practice. These updates clarify our ethical obligations as clinicians in technology and mental health.

Here are key points from the ACA’s “Recommendations for Client Use and Caution of Artificial Intelligence” (AI Work Group) and available APA guidance:

  • Informed consent & transparency: Clients must be given enough information to decide whether to use AI-assisted tools. The ACA guidance states that counselors should explain what selected AI tools can and cannot provide. Clients should understand limitations and risks.

  • Privacy, confidentiality, and data security: The ACA explicitly recommends ensuring that confidentiality is preserved when AI is used, and that information is securely protected under relevant laws (HIPAA and applicable state laws) and ethical codes. This is crucial as we talk more about de-identification and anonymization.

  • Recognizing AI’s limitations and risks: There is strong emphasis that AI tools are not perfect and are not equivalent to human judgment. Risks include incorrect or harmful outputs, bias, and failures in crisis or diagnostic situations; ethical codes demand that we do no harm and practice within our competence.

  • Avoiding use of AI for certain tasks: The ACA recommendations include that AI should not replace professional clinical judgment, particularly for diagnosis or in crisis response. AI should be adjunctive, not primary, in these domains.

  • Equity, bias, diversity: The ACA guidance emphasizes that AI systems can propagate bias if training data is insufficiently representative, or if the system wasn’t designed with attention to cultural context or marginalized populations. Clinicians need to understand this risk and look for vendors and tools that are sensitive to these issues.

  • Accountability & clinician responsibility: Even when AI tools are used, responsibility (both ethical and legal) remains with the licensed clinician. The ACA document says that clients should be informed about who holds responsibility for decisions and outcomes when AI is involved.

APA’s guidance on AI and machine learning in professional practice similarly underscores these obligations (competence, informed consent, privacy, risk):

  1. Transparency and informed consent
    AI use should be disclosed to clients, providers, or third parties (like courts) in a culturally and linguistically appropriate way. Informed consent requires clearly explaining the purpose, applications, risks, and benefits of AI tools. This upholds Principle E: Respect for People’s Rights and Dignity.

  2. Mitigating bias and promoting equity
    AI systems must be evaluated for bias and health disparities. Responsible use requires considering diverse lived experiences to avoid unfair discrimination. This aligns with Principle E and our duty to eliminate bias from professional work.

  3. Data privacy and security
    AI systems handling sensitive behavioral health data must comply with HIPAA and implement strong cybersecurity safeguards. This connects to the APA principles of Beneficence and Nonmaleficence, Fidelity and Responsibility, and Respect for People’s Rights and Dignity.

  4. Accuracy and misinformation risks
    Psychologists should critically evaluate AI outputs before using them in clinical care, validate AI tools where possible, and discontinue use if accuracy concerns arise. This reflects Principle A (Beneficence and Nonmaleficence) and the principle of Integrity.

  5. Human oversight and professional judgment
    AI should augment, not replace, human decision-making. Psychologists remain responsible for final clinical decisions, maintaining oversight to prevent harm.

  6. Liability and ethical responsibility
    Because legal frameworks around AI are still emerging, psychologists must anticipate liability risks and ensure adequate training in AI tool use to mitigate both legal and ethical risks.

Taken together, the PCT article and the APA and ACA guidance give us as clinicians an even more concrete basis for what we must do when using technology in mental health. We can’t afford to stick our heads in the sand if we want to protect our licenses and our clients’ health information.

Bottom line: Ethics codes place the burden on us to understand the tools, disclose risks, and safeguard confidentiality. If AI makes a risk assessment error or mislabels client data, the clinician is not absolved. Our codes are explicit: responsibility lies with the professional, not the platform.

Our Ethical and Legal Duties

We have a duty as clinicians to protect client information. That looks like:

  • Protecting client trust. Mental health data is some of the most sensitive information clients will ever share. Misrepresenting how it’s used erodes trust.

  • Maintaining professional judgment. Even if an AI generates a risk score or treatment recommendation, it is still the clinician’s responsibility to evaluate, confirm, and act.

  • Ensuring compliance. Regulators won’t accept “the vendor said it was safe” as a defense. Clinicians must know the standards and ensure vendors meet them.

Technology and mental health will only continue to converge. But clinicians are not powerless passengers. Where we spend our money, and which tools we endorse, directly shape the industry.

  • Ask hard questions. Push vendors for clarity: what exactly is stored, for how long, and who has access? Has the AI been audited? Are clients informed and consenting?

  • Choose consciously. Don’t just go with the cheapest or trendiest tool. Select platforms that align with your ethics, your practice standards, and your clients’ rights.

  • Inform clients. Transparency matters. If you use AI documentation or risk assessment, explain it in your consent process. Clients should know if their information enters an AI system.

  • Withdraw financial support if necessary. Our money has power. If enough clinicians walk away from opaque or unsafe platforms, companies will be forced to improve.

A Real-World Example

I recently met with a company offering AI-driven documentation. They claimed users could choose not to store any data. Yet their product also offered “risk assessment” and “treatment recommendations.” Their terms of service, meanwhile, explicitly denied providing medical advice.

This contradiction raises serious questions: if the AI is shaping clinical decision-making, how do we reconcile that with disclaimers? Who is responsible if the recommendations are wrong? The answer is clear: we, the clinicians, remain responsible. Which means we must decide carefully whether to bring such a tool into the therapeutic relationship.

Moving Forward: Raising the Bar

Technology and mental health can be a powerful partnership—but only if clinicians refuse to compromise on privacy, ethics, and safety. Here are practical steps to start today:

  • Review the PCT article and understand HIPAA’s actual de-identification standards.

  • Audit the tech you already use: does your EHR, telehealth platform, or AI tool meet these standards? If you don’t know, ask questions until you are clear.

  • Audit any companies you work for or contract with: as a contractor, you are responsible for the technology you use in your work, even if you didn’t choose it.

  • Update your informed consent to reflect any technology you use in documentation, communication, or decision-making.

  • Join with colleagues in demanding transparency from vendors.

Final Word

The intersection of technology and mental health is here to stay. Whether it enhances or undermines our profession depends on us. Tools don’t absolve us of responsibility; they magnify it. By asking hard questions, choosing consciously, and leveraging our collective purchasing power, we can ensure that technology serves our clients, protects their data, and strengthens (not weakens) the therapeutic relationship.
