Measuring trust
Trust shifts constantly, influenced by everything from how tired we are to our past experiences with technology. To measure trust in technology, it is useful to understand it through its dimensions: cognitive trust (does it work reliably?), emotional trust (does it have my best interests at heart?), and systemic trust (do I trust the institutions behind it?). When designing AI experiences, we need to understand all three. A good approach could be to start with lightweight quantitative research to spot patterns, then dive deeper with qualitative methods to understand user behaviour. Small experiments with 50–100 participants can reveal significant insights without breaking budgets or timelines. The aim is to gather enough insight to guide design decisions.
‘Yes, but we can't ask our customers if they trust AI. We provide the service, so it's like asking whether they trust us.' Here I am in a room with a few people, including brand strategists and conversational designers, to talk about how to address the elephant in the room: trust in AI. We are implementing GenAI into a product, and we want to understand whether our customers want to use what we think is good for them, and whether they trust it, since previous research indicated that they don’t. The conversation is heated, and I imagine that many of us are experiencing a similar tension these days.
I also don’t remember any brand that doesn’t have trust in their core values and narrative.
Trust is indeed a multi-dimensional psychological phenomenon that is not easy to pin down, because it changes minute by minute. It is influenced by personal experience, bodily sensations, and things like sleep, cognitive load and stress. It also has a social dimension, so trust formation is influenced by how you interact with others and by your social identity.
Once trust is lost, it is difficult to regain.
When it comes to mapping out trust towards a product, or a company, or a service, I would consider these three different types of trust:
Cognitive trust.
It is the rational bit. We assess whether someone (or something) is reliable, competent, predictable. Think of how we evaluate a new tool based on its features and track record. In branding, cognitive trust relates to the functional values of the brand.
Emotional trust.
It's about believing that someone or something has our best interests at heart. This is harder to achieve with technology, since machines don't have intentions or care. In branding I would say this type of trust is associated with the emotional values of the brand.
Systemic trust.
This is about trusting institutions, processes, regulations. It is the part of trust that is directed not at an individual but at the structures and groups of people behind a product or service.
In our heads, these distinctions don’t exist, but when it comes to measuring and mapping trust, they become helpful.
Going back to GenAI, it is interesting to observe how these different layers interact with each other. A person might have high cognitive trust in AI's diagnostic accuracy but low emotional trust because it feels cold. Or they might trust the technology but not the system that implements it.
When we design AI experiences, we're designing across all three trust dimensions. We can't just focus on making AI work better. We need to think about how people feel and what larger systems they're navigating.
To measure trust, there are a few practical steps I would take.
Leverage quantitative research first.
A light quantitative study can show where the numbers and statistical analysis point, giving us early signals. It can be done on a relatively small budget, and it creates a light structure for further qualitative research. To add nuance, the experiment can use existing methodologies to measure trust across a larger group of people, such as validated trust scales (like Mayer and Davis's trust inventory), cognitive load assessments during AI interactions, or behavioural measures like willingness to follow AI recommendations. These provide statistical pointers while keeping costs manageable, and they create a solid foundation for designing targeted qualitative follow-ups.
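As a toy illustration of how scale data like this gets turned into a number, here is a minimal sketch of scoring a short Likert-style trust questionnaire. The item names and the reverse-coded item are hypothetical, not Mayer and Davis's actual inventory:

```python
# Sketch: scoring a short trust questionnaire (hypothetical items).
# Items are rated 1-5; some are reverse-coded so that a higher score
# always means more trust.

REVERSE_CODED = {"item_3"}  # e.g. "I would double-check the AI's output"
SCALE_MAX = 5

def score_response(answers: dict) -> float:
    """Average the items into a single 1-5 trust score."""
    values = []
    for item, rating in answers.items():
        if item in REVERSE_CODED:
            rating = SCALE_MAX + 1 - rating  # flip a 1-5 rating
        values.append(rating)
    return sum(values) / len(values)

participant = {"item_1": 4, "item_2": 5, "item_3": 2}  # item_3 flips to 4
print(round(score_response(participant), 2))
```

Averaging per-participant scores like this is what makes group-level comparisons and statistical tests possible later on.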
A psychological experiment can give some guidance on the whys of the behaviour, without pushing the budget too much.
Identify patterns and expand with qualitative research.
Once patterns have been identified through statistical analysis, qualitative research helps to unpack behaviours and understand the context. Mixed methods give a better grasp of the whys. In-depth interviews or observation are often very helpful, particularly for exploring the emotional and systemic trust dimensions that numbers alone can't capture. Some elements of qualitative research could be included in the initial psychological experiment, for example open-ended survey questions or post-task reflection prompts that can guide your later interview design.
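To show how open-ended responses can be given a rough first pass before proper human coding, here is a small sketch that tags responses against a hypothetical keyword codebook mapped to the three trust dimensions. The keywords are illustrative assumptions; this only surfaces candidate themes, it does not replace thematic analysis:

```python
from collections import Counter

# Hypothetical codebook: keywords hinting at each trust dimension.
CODEBOOK = {
    "cognitive": ["accurate", "reliable", "wrong", "mistake"],
    "emotional": ["cold", "creepy", "cares", "comfortable"],
    "systemic": ["company", "regulation", "data", "privacy"],
}

def tag_response(text: str) -> set:
    """Return the trust dimensions a response seems to touch on."""
    text = text.lower()
    return {dim for dim, words in CODEBOOK.items()
            if any(w in text for w in words)}

responses = [
    "It felt accurate but a bit cold.",
    "I worry about what the company does with my data.",
]
counts = Counter(dim for r in responses for dim in tag_response(r))
print(counts)
```

A frequency count like this can suggest which dimension to probe first in interviews, but every tagged excerpt still needs a human read.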
Thematic analysis, triangulation and actions.
If you’re running the research solo and can’t triangulate with other researchers or designers, I would be transparent about this limitation and build in alternative forms of validation. Document your analytical process clearly and combine different types of data, such as behavioural logs, interviews, and open-ended survey responses, then analyse them through different lenses, combining cognitive and emotional models of trust. When feasible, seek participant feedback on your interpretations and cross-reference themes with existing literature or internal organisational data. Finally, ask a peer to review the analysis to help surface potential blind spots and reduce interpretative bias.
Using these tools to measure trust can help to design better AI systems, providing insights in a relatively short amount of time, without impacting budget and roadmaps too much. Small-scale experiments with 50–100 participants can reveal interesting patterns in trust formation at a fraction of the cost of extensive qualitative studies.
This approach could create a lean research model where quantitative findings act as signposts for deeper qualitative exploration. This could be especially useful in early-stage product or service development, where access to users is limited. It is not intended to replace appropriate research when needed.
Rather than conducting qualitative studies upfront, you can run focused experiments to identify which trust dimensions need attention, then target your qualitative efforts accordingly. This method provides ongoing feedback loops that align with product development cycles, allowing teams to test trust interventions iteratively rather than waiting for comprehensive research programmes to conclude.
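To make the iterative comparison concrete, here is a minimal sketch of comparing mean trust scores between two hypothetical design variants using a Welch t statistic, built with only the standard library. The variant names and scores are invented for illustration; with roughly 50 per group this gives a rough signal, not a formal verdict:

```python
from statistics import mean, variance

def welch_t(a: list, b: list) -> float:
    """Welch's t statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical per-participant trust scores (1-5 scale averages).
baseline = [3.1, 3.4, 2.9, 3.2, 3.0]           # variant A: no explanations
with_explanations = [3.8, 4.1, 3.6, 3.9, 4.0]  # variant B: adds explanations

t = welch_t(with_explanations, baseline)
print(round(t, 2))  # positive -> the explanations variant scored higher
```

In practice you would pair the statistic with a p-value or confidence interval (e.g. via SciPy) and with the qualitative follow-ups described above, rather than acting on the raw number alone.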
Today, I would use these tools to gather grounded insights, to avoid designing in a complete vacuum or making strategic decisions based on NPS alone. These methods can provide enough context for more informed decisions while acknowledging their limitations.
Tomorrow, the context might change. Some suggest that we are now designing the last generation of human-facing product interfaces. So maybe we are headed towards a world where metrics like NPS will become obsolete.
Photo by Michael Myers.
Parts of this manuscript were drafted with the assistance of AI language models (specifically, Claude 3.7, ChatGPT 4.0, Google Gemini 2.0). The author used AI as a tool to enhance clarity and organisation of ideas, generate initial drafts of certain sections, and assist with language refinement. All AI-generated content was reviewed, edited, and verified by the author. The author takes full responsibility for the content, arguments, analyses, and conclusions presented. This disclosure is made in the interest of transparency regarding emerging research practices.