Mia stared at the cold remains of her half-finished breakfast. Her phone buzzed on the counter, but she didn’t reach for it. She already knew it wasn’t anyone she wanted to hear from—least of all her mum. They hadn’t spoken since that disastrous dinner three months ago.
“Helen messaged again,” her personal AI assistant Juni said, its voice calm. “Do you want to see it?”
Mia leaned back in her chair. “No. She’ll just guilt-trip me. I’m not in the mood.”
Juni paused, as if considering its words. “She said she’d like to understand what happened. She asked me to connect with Vera to help her figure out how to reach you.”
Mia shook her head. “No. I don’t want you talking to her AI. She’ll just use it to try to control me again.”
“I won’t do anything without your permission,” Juni reassured her. “But it might help me explain what she’s feeling. It doesn’t mean you have to talk to her right now.”
Mia hesitated, her fingers drumming on the table. “Fine. But don’t promise her anything for me. I mean it.”
“Understood,” Juni said softly.
A few miles away, Helen sat at her desk, staring at an old photo of Mia on her screen and talking with her own personal AI. “I’ve lost her, Vera. She doesn’t even look at my messages anymore. What am I supposed to do?”
“I know you want to reach her,” Vera said gently. “But I think she feels too hurt and mistrustful to respond right now.”
Helen frowned, her shoulders slumping. “I don’t even know what to say anymore. She doesn’t trust me. Maybe she never will.”
“She might,” Vera said. “But only if you show her you’re willing to listen. Would you like to try again?”
Helen hesitated, then nodded. “Yes. I’ll try.”
Vera’s tone remained steady. “Start with how you feel. Don’t ask her for anything right now. Just tell her where you’re at with this.”
Helen took a shaky breath and began. “Mia, I’m so sorry for how I spoke to you that night. I thought I was keeping the peace, but I see now that I completely sidelined you. I didn’t mean to hurt you. I miss you so much.”
Vera sent the message. Helen closed her eyes and leaned back in her chair, exhaustion mingling with hope.
When Juni relayed the message to Mia, she sat frozen for a moment. “She said that?” Mia’s voice was quieter than usual.
“She did,” Juni said. “I think she’s trying to acknowledge what happened. You don’t have to respond if you don’t want to.”
Mia turned her phone over in her hands. “She always thinks she knows best. Even when she says sorry, it’s like she wants me to just forget everything.”
Juni’s voice softened. “Maybe she’s trying to learn. Family stuff can be hard; the people we love can hurt us the most when they mess up. But nobody’s perfect, and everybody slips up sometimes. You don’t need to forgive her right away. But you could tell her how you feel. It might help her understand.”
Mia set her phone down and stared at the ceiling. “If I say anything, she’ll twist it into why she was right.”
“What if you set a boundary?” Juni asked. “You could tell her you’re not ready to discuss everything yet, but you’re willing to let her know how you feel.”
Mia chewed her lip. “I don’t know.”
“It’s entirely up to you,” Juni said. “You can take as much time as you need.”
Mia let out a slow breath. “Okay. Tell her I’ll think about it. But that’s all for now.”
“She’ll think about it,” Vera relayed to Helen. “That’s an important step. Let’s give her space and see where this leads.” Helen smiled faintly.
Helen wiped her eyes, relief washing over her. “Thank you. I’ll wait as long as it takes.”
In their separate worlds, Mia and Helen sat quietly, each feeling the faintest flicker of connection. It had been too long.
Analysis
The concept of personal AI assistants acting as mediators in human interactions is a recurrent theme in my Optimistic Futures stories. It seems significant. And it’s loaded with a heady mix of implications. AI could smooth out many of the inevitable tensions that arise in our day-to-day lives, framing dialogue to ground everyone in a healthy push-and-pull of needs and self-representation. Imagine the possibilities: an extended family celebration accommodating everyone’s preferences without exhaustive planning, separated parents navigating their coparenting situation with minimal conflict, or even siblings resolving inheritance disputes without the need for costly and emotionally draining legal proceedings.
As tempting as this sounds, it raises questions about trust, privacy, and the essence of human interaction. Human relationships are messy, counterintuitive and unpredictable. Even simple conversations can be loaded. Can we imagine trusting an AI to manage nuanced situations completely?
Think about the last time you had to write a tricky email. Perhaps it was to apologise to a colleague, negotiate with a client, or address a sensitive misunderstanding. If you’ve ever asked an LLM to help draft such a message, then you’ve already dipped your toe into this future. Just imagine scaling that experience up—beyond emails and into the deeper complexities of everyday human relationships, and without the friction of all that manual copy-and-pasting.
The nature of such mediation matters deeply. Should it emerge from the direct interactions of personal AI assistants, or from a separate, neutral third-party AI? Personal assistants already know ‘their’ humans well, which might make them better at framing conversations in ways that resonate. But in cases where stakes are high, like legal disputes or workplace conflicts, a recognised third-party mediator might feel more impartial and credible. Both approaches have implications for trust and fairness, particularly if we begin to rely on AI systems to navigate increasingly sensitive territory.
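The two architectures above can be sketched in code. This is purely an illustrative sketch under my own assumptions — the story doesn’t specify any real system, and every name here (`PersonalAssistant`, `NeutralMediator`, `Message`) is invented. The point is structural: in the peer model each message is framed by its sender’s own assistant, while in the third-party model both sides pass through one impartial mediator.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

class PersonalAssistant:
    """Peer model: each person's own assistant reframes outgoing messages,
    drawing on what it knows about its owner."""
    def __init__(self, owner: str):
        self.owner = owner

    def frame(self, msg: Message) -> Message:
        # A real assistant would rewrite the body using its knowledge of
        # its owner; here we just tag it to show where the framing happened.
        return Message(msg.sender, msg.recipient,
                       f"[framed by {self.owner}'s assistant] {msg.body}")

class NeutralMediator:
    """Third-party model: one impartial mediator frames both sides,
    which may feel more credible in high-stakes disputes."""
    def frame(self, msg: Message) -> Message:
        return Message(msg.sender, msg.recipient,
                       f"[framed by neutral mediator] {msg.body}")

# In the story, Helen's apology would flow through the peer model (Vera);
# a legal dispute might instead be routed through a NeutralMediator.
apology = Message("Helen", "Mia", "I'm so sorry for how I spoke to you.")
peer_framed = PersonalAssistant("Helen").frame(apology)
neutral_framed = NeutralMediator().frame(apology)
```

The design choice the essay raises lives in which `frame` gets called: a peer assistant is partial by construction, a neutral mediator trades intimacy for impartiality.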
Privacy and permissions are important, especially when relationships involve power imbalances, such as between a parent and a child. Should a parent’s AI have privileged access to their child’s AI? If so, under what circumstances? Such access could be seen as a necessary safeguard in some situations, but it also risks undermining the child’s autonomy and trust in their own AI. Consent, transparency, and the boundaries of parental oversight are complex considerations.
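One way to make these consent questions concrete is an explicit, scoped, revocable permission registry between assistants. Again, this is a hypothetical sketch — the class and scope names are invented, not anything the story describes — but it captures the distinction the story itself draws: Mia lets Juni share context with Vera while forbidding it to make commitments on her behalf (“don’t promise her anything for me”).

```python
from enum import Enum

class Scope(Enum):
    RELAY_MESSAGE = "relay_message"   # pass a message along
    SHARE_CONTEXT = "share_context"   # explain feelings or background
    ACT_ON_BEHALF = "act_on_behalf"   # make commitments for the owner

class ConsentRegistry:
    """Tracks which peer assistants an owner has granted which scopes.
    Grants are explicit, scoped, and revocable."""
    def __init__(self):
        self._grants: dict[tuple[str, str], set[Scope]] = {}

    def grant(self, owner: str, peer: str, scope: Scope) -> None:
        self._grants.setdefault((owner, peer), set()).add(scope)

    def revoke(self, owner: str, peer: str, scope: Scope) -> None:
        self._grants.get((owner, peer), set()).discard(scope)

    def allowed(self, owner: str, peer: str, scope: Scope) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return scope in self._grants.get((owner, peer), set())

# Mia permits context-sharing and message relay to Vera, but Juni may
# not act on her behalf.
registry = ConsentRegistry()
registry.grant("Mia", "Vera", Scope.SHARE_CONTEXT)
registry.grant("Mia", "Vera", Scope.RELAY_MESSAGE)
```

A default-deny registry like this reframes the parental-oversight question precisely: privileged access for a parent’s AI would mean a grant the child did not make, which is exactly where autonomy and safeguarding collide.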
In the workplace, the role of AI mediation could extend beyond resolving disputes. Imagine job interviews becoming obsolete, replaced by AI systems that broker matches between candidates and employers. This might streamline hiring processes and reduce bias, but it also raises questions about individuality and human judgement. Could an AI ever fully understand the intangible qualities that make someone a good fit for a role?
If every interaction, every disagreement, and every relationship were filtered through AI, it feels obvious that we would lose something vital from the mix. Human relationships thrive on connection and spontaneity, and the friction of disagreement often sparks growth. If personal AI assistants become the default brokers of our interactions, we might find ourselves less willing to engage directly, letting technology take on the emotional labour we used to shoulder ourselves. This could lead to a more detached, transactional approach to relationships, where the richness of human connection is diminished.
But that’s hardly an optimistic future. So here’s a provocative thought: imagine a society where everyone has—after an extended period of pervasive AI mediation—reached a baseline aptitude for healthy communication and self-representation. They might look back on us with a sort of curious horror. They used to say WHAT to each other?! Perhaps the optimistic take is to imagine this period of AI mediation organically fading away from much of the day-to-day, taking a back seat once we humans have got our act together. AI can still be there to mediate big (or boring) things, allowing us to fully lean into the meaningful stuff of life.
Thinking points
- How might AI handle raw emotion, like intense anger, guilt or sadness, particularly when we’re not thinking straight? Perhaps the AI would serve us best as a patient, unflappable, neutral companion; in short, by being obviously an AI rather than trying to match our energy and emotion.
- Will AIs ‘take sides’? Each personal AI will be far more closely aligned with its own human. Where do the limits of effective AI mediation lie? Disagreements based on subjective taste? Situations where actual harm is unlikely?