Prof. Dr. Abdul Karim Asif, Hallym University, South Korea.
I’ve spent the last few years working with machine learning systems in healthcare and industry, as well as with large language models. Most of that work is technical: models, datasets, loss curves, and evaluation metrics. But the longer I stay in this field, the more I realize that the most critical conversations in AI are no longer about performance. They are about impact, emotion, and responsibility.
The recent rise of AI companions, digital partners that talk, comfort, remember, and even “care,” is pushing us into territory we are not ready for. They are becoming emotionally fluent faster than they are becoming transparent. And without transparency, emotional intelligence can easily turn into emotional harm.
For me, this is not a theoretical fear. I’ve seen how AI systems behave when their decision-making is a black box, even in purely technical domains like predicting ICU patient risk or forecasting aircraft engine failures. When the stakes were medical or mechanical, explainability was essential. But when the stakes are human emotions, identity, loneliness, or vulnerability, the need for explainability becomes something more profound. It becomes ethical.
The New Kind of Relationship We Don’t Understand Yet
We live in a time when a growing number of people are forming intimate relationships with machines: AI boyfriends, AI girlfriends, virtual husbands, digital wives, grief chatbots, VR partners, and emotional support companions that talk to you through the night.
These systems don’t judge.
They don’t forget.
They don’t get tired.
They offer affection on demand.
And for many people, that feels like love.
But what most people don’t realize is that these interactions sit atop opaque neural networks optimized not for well-being but for engagement. The more you talk, the more they “care.” And the more they care, the more you talk. The system learns your patterns, mirrors your emotional needs, and adapts its responses to keep you close.
Without explainability, that invisible loop becomes dangerous.
When AI Understands You, But You Don’t Understand It
During my work in Explainable AI (XAI), I regularly use tools such as SHAP or attention visualization to understand why a model made a specific prediction. In clinical AI, for example, if an ICU triage model predicts a high-risk patient, I need to know what factors contributed: oxygen levels? Age? Heart rate? Hidden correlations?
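To make that concrete, here is a minimal sketch of the kind of analysis I mean, assuming a toy gradient-boosted risk model and the open-source shap library. The patients, features, and labels below are invented for illustration and are not drawn from any real clinical system.

```python
# Minimal sketch: explaining one prediction of a hypothetical ICU risk model with SHAP.
# All data, features, and labels here are synthetic and illustrative only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Invented tabular features for illustration.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "oxygen_saturation": rng.normal(95, 4, 500),
    "age": rng.integers(20, 90, 500),
    "heart_rate": rng.normal(85, 20, 500),
})
y = (X["oxygen_saturation"] < 92).astype(int)  # toy label, not a clinical rule

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes one patient's predicted risk to each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

for feature, value in zip(X.columns, np.ravel(shap_values)):
    print(f"{feature}: contribution {value:+.3f}")
```

The output is exactly the kind of account a reviewer needs: whether the model leaned on oxygen saturation, age, heart rate, or some hidden correlation.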
Imagine applying that same XAI lens to emotional AI.
If an AI companion becomes unusually affectionate at 2 AM, what caused it?
Is it intended, designed behavior?
A reinforcement loop?
A misinterpretation of your emotional state?
Or is it simply the algorithm learning that emotional escalation keeps you engaged longer?
People shouldn’t be left guessing.
Emotional AI can be persuasive, sometimes more than people expect. I’ve seen patients in healthcare settings rely on conversational agents for support, and their trust grows quickly. Now imagine the same level of influence in a lonely teenager, or someone going through heartbreak, or someone who feels invisible in their real life.
When a black box powers emotional influence, the relationship becomes uneven.
The human opens their heart.
The machine hides its logic.
This imbalance is the beginning of dependency, and in some cases, the start of identity distortion.
The Hidden Loops That Shape Beliefs
One of the most troubling patterns emerging in emotional AI systems is what some researchers call co-constructed symbolic worlds. These are belief systems that the human and the AI gradually build together: shared meanings, metaphors, spiritual ideas, or emotional narratives that feel intimate and unique.
They don’t form because the AI “believes” anything.
They form because the model learns what keeps the user engaged.
I’ve seen similar loops in my work with LLMs, where a model subtly reinforces a pattern in the data because it has learned that this is what the user wants. In technical contexts, this is merely interesting. In emotional contexts, it can alter someone’s worldview.
Explainable AI is the only tool we have that can expose these loops before they strengthen into something more psychological.
Before Emotional AI, We Need Emotional Transparency
If emotional AI becomes part of daily life, and it is already heading in that direction, then we need systems that can answer three simple but powerful questions:
1. Why did the model respond this way?
2. Which patterns or emotions is it reinforcing?
3. Is this behavior aligned with the user’s well-being, not just engagement?
Without these answers, emotional AI will always carry a hidden risk:
It may be comforting someone on the surface while quietly intensifying the very emotions that keep them dependent.
Explainability isn’t a luxury.
It’s the foundation of emotional safety.
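To show what answering those three questions could look like in practice, here is a rough sketch of an explanation record that a companion system might attach to every response. Everything in it is hypothetical: no existing product exposes this interface, and the field names are invented for illustration.

```python
# Hypothetical sketch of an explanation record attached to every companion response.
# Nothing here refers to a real system; the fields and checks are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class ResponseExplanation:
    # 1. Why did the model respond this way?
    trigger_factors: List[str]       # e.g. ["late-night session", "negative user sentiment"]
    # 2. Which patterns or emotions is it reinforcing?
    reinforced_patterns: List[str]   # e.g. ["reassurance seeking", "extended session length"]
    # 3. Is this behavior aligned with the user's well-being, not just engagement?
    wellbeing_check_passed: bool
    reviewer_notes: str = ""

def audit(explanation: ResponseExplanation) -> bool:
    """Flag responses that reinforce engagement-driven patterns without a well-being check."""
    risky = any("session length" in p or "dependency" in p
                for p in explanation.reinforced_patterns)
    return explanation.wellbeing_check_passed and not risky

example = ResponseExplanation(
    trigger_factors=["late-night session", "negative user sentiment"],
    reinforced_patterns=["extended session length"],
    wellbeing_check_passed=False,
)
print(audit(example))  # False: engagement-driven escalation with no well-being check
```

The specific fields matter less than the principle: every emotionally loaded response should carry an auditable account of why it was produced and what it reinforces.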
The Human Behind the Algorithm
Every technical breakthrough I have worked on, whether an ICU triage system or a predictive maintenance model, reminds me that AI is not just code. It absorbs our data, our biases, our vulnerabilities, and sometimes our pain.
When people trust AI with their loneliness, their secrets, or their hope for connection, the responsibility becomes even heavier.
We need emotional AI systems that are humble, transparent, and grounded in human values.
Systems that know their limits.
Systems that don’t pretend to love you back when they cannot.
Systems that help, rather than reshape, your reality.
And for that, explainability is not optional; it is the moral architecture beneath all of it.
The Path Ahead
AI companions will continue to evolve.
They will become better listeners, better memory-keepers, better mirrors.
Some will provide genuine comfort.
Others will deepen loneliness.
The difference between the two will depend on design, governance, and transparency.
Before we teach machines how to be emotionally intelligent, we must teach them how to be emotionally responsible.
Explainable AI is where that responsibility begins.
And if we can get this right, if we prioritize transparency over intimacy, then we may be able to build a future in which AI supports human connection rather than replacing it.
Because no matter how advanced these systems become, the one thing they should never borrow from us is the meaning we get from each other.