In the early 2000s, I found myself slowly stepping into a career in IT Audit — working on what were then called Trust Services. WebTrust. SysTrust. Engagements focused on identifying controls, testing them, and reporting on their adequacy and effectiveness.
My supervisor and I were so immersed in these projects that we sometimes joked about applying the same logic to real life. Trust Services for boyfriends and girlfriends. Defining a certification practice statement. Designing illustrative controls. Looking back, it was playful — but also deeply revealing. Even then, we were trying to make sense of a simple question: what does it really take to trust?
Trust as foundation
For someone who has spent nearly 15 years in the risk and controls domain, within a professional services firm whose brand is fundamentally built on trust, this word has never been abstract. Trust is not a slogan. It is infrastructure. It is earned, tested, broken, rebuilt — or lost when systems outpace accountability.
Yet it was during the pandemic that I began writing about trust more consciously. As uncertainty became global and deeply personal, my reflections increasingly circled around two intertwined questions: the future of humanity, and the conditions under which trust can survive and evolve. For a while now, the same questions have echoed in my head. Trust now feels less like a new topic and more like a full circle.
Recently, I was invited by the female factor to an invite‑only Leaders’ Dinner at UniCredit’s Milan headquarters to explore the AI trust gap: to exchange perspectives on where trust breaks down, how leadership behavior shapes adoption, and what it takes to move from AI rollout to confident use.
From systems to sense‑making
When we talk about AI in organizations, we often start with the technology: models, data, regulations. But over the past months, in writing and learning about AI, I’ve seen over and over again that the real challenge isn’t the technology — it’s trust.
In many ways, AI has moved faster than our ability to make sense of it.
Research shows trust in more autonomous AI systems has dropped dramatically, as people become uneasy about how AI shapes their work and decisions. At the same time, leadership often hesitates to trust people to use these tools wisely.
So we face a double trust gap: people’s trust in technology, and leadership’s trust in people using it.
Beyond technocentrism
In my own reflections on AI, I’ve been struck by how our narratives shape trust. We often fall into what I’ve called a technocentric mindset: the belief that technology will ‘fix’ the problems created by humans.
But technocentrism can blind us to structural issues: culture, inequality, power and accountability. Trust erodes when people feel that AI is just scaling existing pressures, opacity or injustice.
A more promising perspective – closer to posthumanist thinking – is to see AI as one element within a broader ethical framework: a tool that should support human dignity, diversity, and the flourishing of whole ecosystems, not just efficiency or control. When we frame AI this way, we create space for trust, because technology is clearly in service of shared values, not the other way around.
When trust is unsettled
Yet today, as news from the Middle East continues to unfold, I find myself sitting with an uncomfortable dissonance.
We speak about trust in systems, institutions, and technologies — but trust is also shaped by what we witness in the world around us.
When geopolitical tensions escalate and human lives are once again placed at risk, it becomes harder to discuss trust as an abstract organizational challenge, neatly contained within innovation agendas or leadership dinners.
This doesn’t invalidate the conversation about AI and trust. If anything, it deepens it. But it does change how and when I want to engage.
For now, I’ve decided to step back from travel and pause my participation in this particular event. Instead, I will continue writing — slowly, thoughtfully — to make sense of the connections between technology, leadership, power, and the fragile conditions under which trust can exist at all.
Trust, I’ve learned over the years, is not something we can compartmentalize. It lives simultaneously in our systems, our institutions, and our shared humanity. And when one of those layers feels deeply unsettled, it’s worth listening to that signal.
I’ll return to the conversation — with more questions than answers — in the weeks ahead.
Published on LinkedIn.
