Does Responsible AI Have an Identity Problem?
Would you drive a car without seatbelts? Would you let your children?
For much of the twentieth century, the answer would have been an obvious yes. Seat belts weren’t generally available, much less standard. So, absent an alternative, drivers assumed that brakes were enough. It wasn’t until the 1960s, with mounting crash data, public awareness campaigns, and state laws, that perceptions changed. Brakes, after all, won’t save you from the accident you don’t see coming.
But despite mounting evidence that Responsible AI practices raise the standard of safety, most brands still treat them as a cost rather than a safeguard, citing legal and ethical complexity as barriers to AI adoption. In 2025, business leaders will pour resources into AI tools and systems while sidestepping the accountability frameworks that would make them safer. Why? Because, like seat belts in the 1960s, Responsible AI suffers from an identity problem.
The Volvo Parallel
In 1959, Volvo did something radical: it invented the three-point seat belt, then gave the patent away for free. Any automaker could use it. The data was undeniable – seatbelts saved lives and prevented injuries. Yet, the industry hesitated. Consumers weren’t demanding seat belts. And brands worried that marketing them as a benefit would imply their cars were dangerous.
Cost, complexity, and inertia won out. Adoption was sluggish. Public awareness, government mandates, and shifting consumer expectations eventually forced a change. Today, a seat belt is non-negotiable. We don’t think twice about buckling up.
Organizations deploying customer-facing AI are at a similar inflection point, just starting to accelerate.
The language of AI safety is still in flux: brakes, guardrails, kill switches. But seat belts may be the better analogy. A seat belt is what protects the customer (and so, the brand) after an AI system has produced a sub-optimal or harmful experience.
The Business Case for Safety
AI-powered chatbots, autonomous agents, and recommendation engines are increasingly woven into daily life. Yet Responsible AI processes (whether in the form of transparency, human oversight, or clear ethical guidelines) often remain an afterthought: a box to check, not a through line across the entire innovation process. Leaders hesitate, citing cost, time, and complexity as reasons to avoid them.
But that calculus may soon change.
"Business teams, tech teams, and data-science teams have to iterate for months to craft exactly how humans and AI can best work together," says Sylvain Duranton, Global Leader at BCG X. "The process is long, costly, and difficult. But the reward is huge."
And it’s not just about compliance. For brands, Responsible AI is about trust, and trust, once lost, is hard to recover, as Apple is discovering now in the UK. Volvo built an enduring reputation on safety, and consumers responded. In an AI-driven world, customers are already coming to value privacy and security, perhaps faster than car buyers came to value safety in the 1960s.
Every Industry Needs Its “Volvo of AI”
Innovation is thrilling. The speed, the possibility. But how fast do customers really want to go in their AI-powered experiences without seat belts? And once people realize the branded AI they’re using lacks available safeguards, will they avoid it, preferring instead a brand known to champion safe AI experiences?
One automotive brand made safety its mission, pioneered the technology, and gave it away to every competitor, because it valued everyone on the road, not only its own customers:
From the very outset Volvo Cars has been a brand for people who care about the world we live in and the people around us. We have made it our mission to make life easier, better and safer for everyone.
Today, with the benefits and perils of emerging AI-powered experiences sitting cheek by jowl, customers in every industry deserve their Volvo: a brand that sets the standard for AI safety, not because regulators demand it, but because it is good for everyone.