Patients’ trust and AI adoption in healthcare.
AI adoption in healthcare often focuses on layering the technology onto complex existing infrastructures, forgetting that if patients don't trust the technology itself, nobody will use it and money will be wasted.
This project investigates how trust shapes appropriate reliance on AI, and whether explainability and the depth of information provided affect patients' wellbeing.
Impact. Delivered two controlled experiments with 170 participants. Developed statistical analysis skills using mixed ANOVAs and experimental design methodologies. Created a framework for measuring trust formation in the adoption of AI in healthcare services.
My role. UCL graduate, MSc Cognitive and Decision Sciences (2024-25): behavioural research, experimental design, data gathering, statistical analysis, HCI.
The healthcare AI trust crisis.
AI can detect cancer 29% more effectively than radiologists, with the potential to increase timely, life-saving interventions. However, a critical blind spot is costing the industry millions: healthcare organisations focus on layering technology onto complex infrastructures, sometimes forgetting that if patients don't trust the technology, adoption fails and investments are wasted.
The experimental study I ran at UCL suggests that even when AI systems work perfectly, patients consistently trust identical diagnoses 15+ points less when delivered by AI rather than by a human doctor. This trust gap is a business-critical barrier that determines whether your multi-million-pound AI investment succeeds or becomes expensive shelfware.
Understanding trust formation.
A patient's success in adhering to a therapy is widely believed to be directly associated with the trust they develop in their physician. The introduction of AI systems into healthcare is challenging this dynamic: AI can assist patients when a doctor isn't available, but it also faces significant adoption challenges.
To understand how AI systems can be integrated into existing health services, I designed two psychological experiments examining how trust forms towards AI-generated diagnoses when they are delivered by the AI system itself versus when they are delivered by a human doctor.
Experiment 1
Research Question.
How does the complexity of diagnostic explanations affect patient trust in AI-generated medical diagnoses?
Design.
109 participants (120 recruited, 11 removed from the study) evaluated medical scenarios across cardiac and dermatological conditions with varying severity levels. Participants were randomly assigned to receive either basic explanations (high-level diagnostic information) or extended explanations (comprehensive technical details, including medical terminology and diagnostic reasoning).
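For readers interested in the analysis approach, the sketch below shows how a mixed ANOVA of this kind of design could be run in Python with pandas and pingouin. It is a minimal illustration under assumptions, not the study's actual script: the file name and column names (trust_score, severity, explanation, participant_id) are hypothetical placeholders.

```python
# Illustrative sketch only: a mixed ANOVA on trust ratings.
# Assumes a long-format CSV with one row per participant x scenario.
import pandas as pd
import pingouin as pg

df = pd.read_csv("experiment1_trust_ratings.csv")  # hypothetical file name

# Between-subjects factor: explanation condition (basic vs extended)
# Within-subjects factor: scenario severity (e.g. mild vs severe)
# Dependent variable: self-reported trust score
aov = pg.mixed_anova(
    data=df,
    dv="trust_score",         # hypothetical column: trust rating per scenario
    within="severity",        # hypothetical column: severity level
    between="explanation",    # hypothetical column: basic / extended
    subject="participant_id", # hypothetical column: unique participant ID
)
print(aov.round(3))

# A significant explanation x severity interaction would correspond to the
# pattern reported below: extended explanations lowering trust in severe cases.
```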
Key Insight.
Contrary to expectations, detailed explanations actually decreased trust for severe cardiac conditions, suggesting that cognitive overload undermines patient confidence in high-stakes medical decisions.
Experiment 2
Research Question.
How does the timing of diagnostic information presentation influence patient trust and decision-making?
Design.
47 participants (of 50 initially recruited) experienced identical cardiac diagnostic information presented in three different temporal sequences: diagnosis before treatment recommendations, integrated step-by-step disclosure, or treatment recommendations presented before detailed diagnostic explanations.
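A comparable analysis sketch for this experiment is shown below, assuming for illustration that the three presentation orders were compared between participants; the file and column names (trust_score, timing) are hypothetical, not the actual dataset.

```python
# Illustrative sketch only: comparing trust across the three timing conditions,
# assuming one trust score per participant and a between-groups design.
import pandas as pd
import pingouin as pg

df = pd.read_csv("experiment2_trust_ratings.csv")  # hypothetical file name

# One-way comparison of trust by information-timing condition,
# e.g. 'diagnosis_first', 'stepwise', 'treatment_first'
aov = pg.anova(data=df, dv="trust_score", between="timing", detailed=True)
print(aov.round(3))

# Follow-up pairwise comparisons with multiple-comparison correction,
# e.g. to test whether 'treatment_first' outperforms the other orders.
posthoc = pg.pairwise_tests(
    data=df, dv="trust_score", between="timing", padjust="bonf"
)
print(posthoc.round(3))
```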
Key Insight.
Patients demonstrated consistently higher trust when receiving treatment recommendations before diagnostic explanations, indicating that progressive information disclosure better supports trust formation than front-loading complex medical details.
Opportunities for healthcare services.
Explanation complexity can backfire - Extended explanations decreased trust for severe conditions due to cognitive overload, challenging current assumptions about transparency in AI healthcare systems.
Information timing matters more than detail - Progressive disclosure increased trust scores by 8+ points across all measures, offering a cost-effective intervention that outperformed complex explanation systems.
The human-AI trust gap persists - Patients rated identical diagnoses 15+ points higher when attributed to human doctors versus AI systems, but strategic information design can help bridge this gap.
Cognitive load theory provides a predictive framework - Understanding how patients process information under stress allows us to design AI interfaces that support rather than overwhelm decision-making.