Ben sat across from me, explaining that his low motivation, lethargy and trouble sleeping looked, based on content he had seen online, like depression. I recommended he get his bloodwork done with his GP, who found that Ben was low in vitamin D and iron, both of which can mimic depressive symptoms. Under his GP’s care, Ben’s symptoms quickly resolved without further psychological intervention.
Thuy made an appointment with me armed with research notes and her old school and university records, after a colleague was diagnosed with attention deficit hyperactivity disorder. After going through the assessment process, I diagnosed her with inattentive ADHD, a condition commonly underdiagnosed among women and girls. Thuy was relieved and felt as though her life finally made sense, after years of assuming she was “just lazy”.
These two cases are fictional amalgams, but they illustrate a new ritual that has become commonplace in my clinical work. Clients no longer just describe their symptoms – they often arrive with printouts, screenshots of dense articles, AI chatbot summaries and the phrase “I’ve done my research”. Make no mistake, I am fully supportive of people trying to make sense of their mental health symptoms. Too often, when there are comorbid physical and mental health issues, people have been turned away from health professionals without the care and support they need. Often, like Thuy, people are correct in their hypotheses. Just as often, like Ben, they are not.
The Rise of the Amateur Health Expert
This client-led research is empowered by the internet’s vast library yet unaided by guidance about how to interpret the information. We are witnessing the rise of the amateur health expert, a well-intentioned but at times costly role. Taking an active interest in your health is positive but the democratisation of information, without the concurrent democratisation of critical research skills, has created a perfect storm for misinformation.
The scale of the trend is striking. Research indicates that nearly half of people in the UK have self-diagnosed at least once in the past year. Among 16- to 24-year-olds, 18% have self-diagnosed at least four times in the last 12 months. The primary drivers are difficulties in securing timely NHS appointments, cited by 36% of respondents, and concerns about long waiting lists, at 22%. Health anxiety also plays a significant role, with 21% self-diagnosing due to worries about their health. A third of individuals, particularly younger ones, turn to social media platforms such as TikTok and Instagram for health information. While social media can increase awareness and destigmatise mental health discussions, it also contributes to misinformation and the “medicalisation of everyday distress.”
Artificial intelligence is adding another layer of complexity. Almost two-thirds of the UK population – 63% – use AI for symptom checks, followed by researching side effects (50%) and treatment options (30%). Among 18- to 24-year-olds, more than four-fifths (85%) regularly search for health information using AI. Despite this heavy use, only 11% say it has significantly improved their health situation, while a quarter of people feel more comfortable using AI than having a face-to-face appointment with a health professional.
What can follow this self-directed research is half-understood statistics, cherry-picked case studies, viral social media threads and anecdotes masquerading as legitimate data. I have seen anxiety spiral from misreading a side-effect profile and depressive withdrawal justified by a misinterpreted, dangerously low-quality study. We are drowning in data but missing vital how-to knowledge. The consequence is individual confusion and a collective erosion of trust in the scientific process, fuelled by cognitive biases that run rampant in online echo chambers. Confirmation bias, for example, leads us to seize upon the one outlier study that confirms our fears. The Dunning-Kruger effect allows a few hours on YouTube to foster an illusion of expertise that dismisses experts who have decades of clinical training.
The conditions people are trying to make sense of are familiar. Nearly a third of those who self-diagnose do so for a mental health condition, including depression, anxiety, OCD and eating disorders – with rates most acute among young people, over half of whom (52%) self-diagnose a mental health condition. Vitamin D deficiency, meanwhile, can produce symptoms that easily mimic depression: low mood, irritability, fatigue, muscle weakness, bone aches and joint pain. The National Institute for Health and Care Excellence recommends that all adults in the UK take a daily supplement containing 400 IU (10 micrograms) of vitamin D throughout the year, given the lack of sunlight. Iron deficiency also presents with tiredness, lethargy, shortness of breath, heart palpitations and a pale complexion – all of which can be mistaken for psychological issues. And ADHD, especially the inattentive type, is frequently underdiagnosed in women and girls, who may present with forgetfulness, distractibility, difficulty with organisation and mental fog, often dismissed as laziness.
How to Read the Evidence – and What to Ask
For many, research has become synonymous with reading or searching online. For scientists, reading is merely the first step in a gruelling process. True research involves designing a question that can be tested, selecting an appropriate methodology, navigating ethical reviews, collecting and analysing data, and subjecting every assumption to peer scrutiny. Academia’s barriers – paywalls, jargon and complex statistics – reflect this specialised, rigorous work. A public health campaign to improve data literacy would help bridge that gap.
To navigate the research landscape, people must first understand the hierarchy of evidence. Not all information is created equal. At the top are systematic reviews and meta-analyses, which synthesise all available randomised controlled trials (RCTs) on a topic, offering the highest certainty. Next come RCTs themselves, considered the gold standard for intervention studies. Below that sit cohort studies, which track groups over time; then case-control studies, which compare people with and without a disease; then case series and case reports, which describe individual patients. At the very bottom are expert opinion and anecdotal evidence – the personal testimonies and “I know someone who …” stories that, while powerful, prove nothing about general efficacy or safety. A viral Instagram reel is anecdote; a meta-analysis of 50 RCTs is evidence. Confusing the two is a critical error.
So how can you become a smarter consumer of health information, rather than a casualty of it? When you encounter a claim or a miracle cure that sounds too good to be true, pause and interrogate the source with these questions.
What is the study design? Is it a controlled trial or a single-case report? Locate it on the evidence hierarchy.
Who was studied? Did the research include people like yourself in age, gender, health status or ethnicity? A study on 20-year-old athletes may not apply to a 60-year-old with a chronic condition.
Who is behind it? Check the funding source and author affiliations. Is it published in a reputable, peer-reviewed journal? Be warned: the peer-review system itself is under assault from AI-generated “slop papers” – fake studies churned out to pad academic CVs – making vigilance even more essential.
What are the numbers? How many participants were involved? Are the results statistically significant and do the authors openly discuss the study’s limitations?
What is the consensus? Is this a lone finding or does it align with the broader body of evidence? What do other independent experts in the field say?
This critical lens is your best defence. It allows you to distinguish between a robust clinical guideline and a compellingly packaged story.
When to Step Back and Ask an Expert
The most vital tip is to turn to the experts you are trying to emulate. Your research should be a prelude to a conversation, not a replacement for one. Being curious online and reaching a conclusion is not the same as testing your convictions. A qualified professional is trained in questioning, weighing conflicting evidence and applying population-level data to your unique, individual context. This is absolutely not to say that experts are never wrong or that science is infallible. Science evolves precisely because it is tested and retested, and knowledge is built upon.
Systemic issues complicate the picture. Access to healthcare remains a significant barrier: patients’ experiences of community mental health services in England continue to deteriorate, with increasing numbers reporting difficulty accessing services. A third of respondents in one survey reported waiting more than three months between their initial assessment and their first treatment appointment, a delay that can lead to a deterioration in mental health. Underfunding, lack of resources, stigma and discrimination – particularly affecting ethnic and racial minorities – all prevent people from seeking professional help. Medical misogyny, racism and classism still exist in spades, and these too must be attended to immediately to restore public faith in institutions and experts.
In our collective quest for agency over our health, we must not mistake information for understanding, or confidence for competence. An important act of self-care in the digital age may not be finding the answer yourself but developing the wisdom to know who to ask.
