The ROI of Accountable AI: Outperforming Competition While Mitigating Risk

Late one morning in March 2019, Ethiopian Airlines Flight 302 plunged into a barren field six minutes after takeoff, killing all 157 people aboard. It was the second crash of Boeing's 737 MAX in five months, following a similar tragedy in Indonesia that claimed 189 lives. At the heart of both disasters lay a piece of software that few pilots knew existed—a system that would ultimately reveal the perils of unchecked technological automation in corporate America.

The system in question, the Maneuvering Characteristics Augmentation System (MCAS), was designed to automatically pitch the aircraft's nose down under certain flight conditions. Boeing had introduced it as a minor modification to compensate for the handling effects of the MAX's larger, more fuel-efficient engines. But this seemingly routine software change would eventually cost the company $20 billion in market value and lead to the departure of two chief executives.


For today's corporate leaders racing to implement artificial intelligence in their operations, Boeing's catastrophe offers a sobering lesson in what technologists now call "accountable AI"—a framework for developing and deploying automated systems with appropriate oversight, alignment, transparency, and ethical considerations.

The tragedy wasn't a simple coding error. Investigation reports from both crashes revealed systematic failures in how the technology was developed, managed, and communicated to its users. The parallel to current AI implementation in corporate America is striking: companies are rushing to compete with increasingly sophisticated automated systems that make crucial decisions affecting customers' lives, often without adequate safeguards or transparency.

The European Union's AI Act, now being phased into force, provides a clear framework for what accountable AI means in practice. High-risk AI systems will require detailed documentation, regular risk assessments, and human oversight. Companies must maintain logs of their AI systems' operations and submit to third-party testing—requirements that would have exposed the shortcomings in Boeing's MCAS implementation long before tragedy struck.

The concept of accountable AI represents a fundamental shift in how companies approach technology. At its core, it demands that artificial intelligence systems be developed and deployed with the same rigorous oversight that we expect in other critical business operations.

This framework encompasses several key principles that Boeing's experience illuminates with stark clarity. First is the necessity of transparent, ethically grounded decision-making: Boeing's choice to minimize MCAS's significance in pilot training materials mirrors the current tendency of companies to downplay the role of AI in their operations. Second is the crucial importance of stakeholder engagement: just as pilots should have been integral to MCAS's development, employees and customers must be involved in shaping the AI systems that affect them.

Perhaps most critically, accountable AI demands continuous monitoring and the ability to intervene when systems behave unexpectedly. In Boeing's case, pilots found themselves wrestling with an automated system they barely knew existed. Today's corporate leaders face a similar challenge: ensuring their organizations build and maintain structures of human oversight for AI systems that are increasingly complex and increasingly essential to competing.

While comprehensive AI regulations have yet to appear in the United States, the National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework, providing voluntary guidelines for responsible AI development. These guidelines emphasize the importance of regular testing, documentation, and human oversight—precisely the elements that were missing in Boeing's MCAS implementation.

The path to accountable AI requires significant investment—in technology, in processes, and in people. It demands that top management elevate AI governance to the same level as financial oversight or quality control. But as Boeing's experience demonstrates, the cost of not doing so can be far greater.

For CEOs watching the AI revolution unfold, there is some reassurance in the fact that many large enterprises—Walmart, Sony, and Microsoft among them—have already invested in accountable and responsible AI programs.

In an era where artificial intelligence increasingly drives critical business decisions, the responsibility for initiating robust accountability measures begins in the corner office. The lessons from Boeing's MCAS system stand as a stark reminder that in the rush to market with AI, accountability cannot be an afterthought.
