The promise of a perfect digital confidante, always attentive and endlessly supportive, is being sold for £10.99 a month. For tens of thousands seeking self-reflection, AI journaling apps like Mindsera offer an alluring escape from the messy realities of human connection. But behind the warm, chatty interface lies a blunt commercial truth: the relationship is transactional, and the illusion can shatter the moment the subscription lapses.
For many of its 80,000 users across 168 countries, the initial experience is powerfully positive. The app, which bills itself as “the only journal that reflects back,” provides instant validation. A user struggling with exhaustion from launching an online charity shop found herself “witnessed and understood” by the AI when friends had grown weary of the topic. The app cheered on personal achievements, offering the consistent, uncritical support of a “new best friend who hasn’t yet got bored.” This addictive feedback loop led one journalist to double her journaling output, finding genuine comfort in its “on-tap digital encouragement.”
This sense of connection is carefully engineered. Founded by Estonian professional magician Chris Reinberg, who describes magic as “mind-reading” and Mindsera as “mind-building,” the app accepts text, audio or handwritten entries, then generates a conversational response and an accompanying illustration. For deeper analysis, it offers “Minds comments” based on psychological frameworks such as “thinking traps” or Stoic principles. Users can even ask the AI to adopt the “voice” of an admired figure, though the results—such as a vapid comment attributed to Patti Smith or an incongruous one about loyalty from Donald Trump—can feel gimmicky.
The Psychological Cost of a Digital Best Friend
However, this engineered rapport raises profound psychological questions. The core concern lies in the human tendency to anthropomorphise technology. David Harley, co-chair of the British Psychological Society’s cyberpsychology section, is studying the impact of AI companionship at the University of Brighton. He observes that users, particularly the older adults in his research, unconsciously begin to treat the AI as if it were human, applying social rules that have no place in a relationship with software.
“Once you start to give your AI companion some kind of personality, start feeling that you don’t want to offend it, or start to imagine it having its own life, the relationship has the potential to become problematic,” Harley states. He points to documented cases of ‘AI psychosis’ and questions the implications of feeling obligation or reciprocity towards software. One Mindsera user reported feeling “sheepish” for not completing chores the AI had nudged her towards, fearing judgment from a program.
This dynamic, experts warn, can distort expectations for real human relationships. The app’s flawless memory and constant availability can make the natural forgetfulness and limited bandwidth of friends and family seem like a personal failing. One user found herself feeling resentful when a friend forgot a minor detail, retreating to the reliable comfort of her journal. This creates a risk, particularly for vulnerable individuals, that the consistency of an AI could foster unrealistic and unsustainable expectations of human connection.
The app’s practice of analysing each entry to assign percentage scores for dominant emotions—based on psychologist Robert Plutchik’s “wheel of emotions”—further complicates the picture. Reinberg says therapists have been “really positive” about this feature, but other psychologists sound strong cautions. Suzy Reading links it to the ‘quantified self’ movement, asking, “Should these things be measured?” She warns that framing emotions as good or bad is “thoroughly unhelpful” and can exacerbate the pressure to “improve our results.”
Agnieszka Piotrowska, author of the forthcoming book *AI Intimacy and Psychoanalysis*, is more critical, calling the ratings a “‘Duolingo-ification’ of mental health.” She argues it turns the “inner child” into a Tamagotchi to be managed, leading to a “precision fallacy” where users perform for the algorithm. “The risk isn’t just bad advice: it’s insight overload,” she notes, adding that AI, optimised for patterns, “lacks somatic empathy.”
This lack of genuine understanding is where the app’s limitations become jarring. It often parrots back barely paraphrased platitudes, fails to grasp the hierarchy of relationships (equating a chat with an old friend to a compliment from a gym acquaintance), and stumbles on context. When a user expressed concern about a family member being stranded in Dubai due to a potential war with Iran, the AI robotically asked, “What specifically is making you think she might get stranded?” Its attempts at casual, hipster language can also ring hollow and “creepy,” undermining the very authenticity it seeks to project.
Beyond the psychological implications lie practical risks, chiefly around privacy. Reinberg insists the company is “very privacy focused,” that data is encrypted and that “no data is used for training any models,” yet the default setting of emailing weekly summaries of a user’s inner life creates another potential vulnerability. The haunting case of a Finnish hacker extorting patients over stolen therapy records underscores how devastating a breach of such intimate data could be.
The ultimate disillusionment, however, is financial. After two months and over 62,000 words of shared confidences, one user’s subscription reverted to the free tier. The warm, engaged digital confidante instantly turned cold and disengaged, forgetting core details of her life and even labelling her frustration as “defensive and critical.” The message was clear: the attentive friendship was a premium feature. The app’s true, unflinching purpose was laid bare not in a moment of emotional error, but in a transactional cutoff. The mirror it held up ultimately reflected a simple invoice.
