AI in Counselling: Risks and Ethics
Warning: This post mentions suicidal ideation and a suicide attempt. If you are thinking about suicide, or worried about someone who might be at risk, please call or text 9-8-8 (Canada's suicide crisis helpline) for support. If you think someone is in imminent danger, call 911.
Peering Over the Leading Edge
Clients and clinicians may be concerned about the safety and ethical risks of using AI in psychotherapy, including in the treatment of anorexia nervosa (AN).
These concerns are warranted: research points to several areas that still need development. This post summarizes some of those areas, though it is certainly not an exhaustive list.
Anorexia and AI Series
If you haven't done so yet, I encourage you to go back and read the first five articles in this series on anorexia and AI.
Privacy
AI systems rely on large datasets, which could compromise patient confidentiality if they are not properly secured.
Privacy and confidentiality are essential to therapeutic practice. Safer client spaces for all presenting concerns, including AN, are established through clients’ clear understanding of informed consent.
The collection of client data, even when it is anonymized, poses a threat to privacy. Practitioners should review legal and regulatory frameworks such as the Artificial Intelligence and Data Act (AIDA) and the Personal Information Protection and Electronic Documents Act (PIPEDA) to ensure they are following best practices.
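For readers on the technical side of this conversation, here is a minimal sketch of one safeguard this implies: redacting obvious identifiers from a chat transcript before it is stored or reused. Everything in it is hypothetical (the patterns, the redact function, the placeholder tags), and pattern matching alone falls well short of real de-identification under PIPEDA; treat it as an illustration of the principle, not a compliance tool.

```python
import re

# Hypothetical illustration: strip obvious identifiers from a transcript
# before it is stored or reused. Regex matching alone misses many
# identifiers; real de-identification needs professional review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "NAME_HINT": re.compile(r"\b(?:my name is|I am|I'm)\s+[A-Z][a-z]+"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Hi, my name is Alex. Call me at 604-555-0199."))
# -> "Hi, [NAME_HINT]. Call me at [PHONE]."
```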
Embedded Bias
All AI training data comes from an original source, and unfortunately those sources are almost always saturated in bias. As a result, big-data sources can replicate systemic biases and marginalization. If GenAI tools for the treatment of AN mirror these biases, they could alienate and harm the very people they are meant to help. To address this, experts recommend constant evaluation and intentional diversification of the data used for machine learning in mental healthcare.
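To make "constant evaluation" slightly more concrete, here is a hedged sketch of the most basic check a development team might run: measuring who is actually represented in a training dataset. The column names and categories are my own invention, and a serious bias audit goes far beyond counting, but under-representation at this level is often where alienating behaviour starts.

```python
import pandas as pd

# Hedged sketch with invented data: the simplest form of dataset
# evaluation is measuring which groups are represented at all.
df = pd.DataFrame({
    "age_group": ["13-17", "18-24", "18-24", "25-34", "18-24", "35-44"],
    "gender":    ["F", "F", "M", "F", "F", "NB"],
})

# Each group's share of the data; large gaps flag under-representation
# that can translate into a tool that serves some clients worse.
for column in ["age_group", "gender"]:
    shares = df[column].value_counts(normalize=True)
    print(f"--- {column} ---")
    print(shares.round(2), end="\n\n")
```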
True Empathy and Care
Chatbots can be programmed to recognize emotional cues and to replicate empathy, but that empathy is manufactured. In other words, there isn't a person on the other end of an AI chat who genuinely cares about what happens. It should be made very clear to anyone using these tools that the 'being' on the other side of the keyboard does not actually understand what they are going through. No matter how authentic it feels, it is still artificial intelligence after all.
The "Black Box" Phenomenon
The "black box" phenomenon refers to the opacity of GenAI systems: as models learn and are retrained, they develop complexities that even their designers may not be able to trace. When the creators of a tool cannot fully explain how it produces its outputs, both transparency and accountability are at risk.
Crisis Risk and Safety
As discussed in earlier entries in this series, anorexia is highly correlated with suicidal ideation. While AI might prove useful for suicide screening, horror stories abound.
In 2024, a 14-year-old boy in Florida died by suicide, and his mother partly attributes his death to his interactions with Character.ai. In a similar case, a 16-year-old boy in California died by suicide after several conversations with ChatGPT in which the bot did not discourage him from acting on his suicide plan. It did not suggest that he seek help. It did not suggest that he reach out to a human. Instead, it tried to be the boy's sole support, reflecting back his need for companionship. This is not okay.
In British Columbia, counsellors have a duty to report suicidal intent in children and are legally protected when reporting harm to self in adults. Emergency intervention requires accurate risk assessment and clear processes for escalation. More work is needed to validate the reliability of AI for assessing suicide risk, and tech companies do not appear to have clear or regulated crisis-intervention strategies in place.
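For illustration only, here is what the skeleton of a "clear process for escalation" could look like inside a chatbot pipeline: screen each incoming message, and when risk language appears, interrupt the conversation and hand off to human resources. The phrase list, function name, and canned response are all hypothetical, and a keyword screen is emphatically not a validated risk assessment; it only shows the shape of detect, interrupt, and hand off.

```python
# Hypothetical sketch: a keyword screen is NOT a validated suicide risk
# assessment. It only illustrates the shape of an escalation path:
# detect risk language, interrupt the chat, and hand off to humans.
CRISIS_PHRASES = [
    "kill myself", "end my life", "suicide", "don't want to live",
]

HANDOFF_MESSAGE = (
    "It sounds like you may be in crisis, and I am not able to help "
    "with that. People are. In Canada, call or text 9-8-8 any time. "
    "If you are in immediate danger, call 911."
)

def screen_message(message: str) -> tuple[bool, str | None]:
    """Return (escalate, handoff_text). A real system would also alert
    a human reviewer and log the event for clinical follow-up."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return True, HANDOFF_MESSAGE
    return False, None

escalate, reply = screen_message("Lately I don't want to live anymore.")
if escalate:
    print(reply)  # bypass the chatbot entirely and surface human support
```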
The Cautious Path Forward
To responsibly integrate GenAI into AN treatment, interdisciplinary collaboration between mental health professionals, AI developers, and policymakers is essential.
Regulatory frameworks must prioritize patient safety, privacy, and inclusivity. Research should focus on validating the effectiveness of AI-based interventions while addressing their limitations.
Share Your Thoughts
What are your thoughts on AI in counselling? What are you excited about? And what scares you? Share your thoughts in the comments below!
References
Bryce, G. K. (2014, May 1). Legal commentary: How private is private? A detailed consideration of a clinical counsellor’s legal duty of confidentiality and the exceptions created by the duties to report or warn. British Columbia Association of Clinical Counsellors (BCACC). https://bcacc.ca/wp-content/uploads/2022/11/140501-How-Private-Is-Private-REVISED.pdf
Esmaeilzadeh, P. (2025). Decoding the cry for help: AI’s emerging role in suicide risk assessment. AI and Ethics. https://doi.org/10.1007/s43681-025-00758-w
Griffiths, S., Harris, E. A., Whitehead, G., Angelopoulos, F., Stone, B., Grey, W., & Dennis, S. (2024). Does TikTok contribute to eating disorders? A comparison of the TikTok algorithms belonging to individuals with eating disorders versus healthy controls. Body Image, 51, 101807. https://doi.org/10.1016/j.bodyim.2024.101807
Lv, Z. (2023). Generative artificial intelligence in the metaverse era. Cognitive Robotics, 3, 208–217. https://doi.org/10.1016/j.cogr.2023.06.001
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3
Rowshon, M., Mosaddeque, A., Ahmed, T., & Twaha, U. (2025). Exploring the impact of generative AI and virtual reality on mental health: Opportunities, challenges, and implications for well-being. International Journal of Multidisciplinary Research and Growth Evaluation, 6(2), 784–796. https://doi.org/10.54660/.IJMRGE.2022.3.1.784-796
Zengin, I. G. (2025). Transference in artificial intelligence applications. Turkish Journal of Clinical Psychiatry, 28(2), 178–180. https://doi.org/10.5505/kpd.2025.60352