Does Your Brand's Style Guide Need A Chapter on Accountable AI?
AI-enhanced brand communications can reach millions in seconds, yet few organizations can articulate a clear governance framework for this powerful technology.
"80% of multinational brand owners have concerns about how creative and media partners are using generative AI on their behalf, and legal (66%), ethical (51%) and reputational (49%) risks were also cited as major roadblocks to more widespread adoption. Currently, just 36% of companies have introduced terms prescribing how their partners can use generative AI on their behalf." (World Federation of Advertisers, Sept. 17, 2024)
The traditional brand style guide has long served as the touchstone for maintaining consistency across marketing channels. But as brands increasingly deploy AI-powered customer experiences – customer service chatbots, original content, and personalized marketing – we face a new frontier of brand representation, one that outstrips what conventional guidelines can govern.
This presents a threefold challenge to brand guardians: 1) technologists, who architect these AI systems, often operate outside traditional brand governance; 2) creative teams, though fluent in brand values, may lack an understanding of permissible AI capabilities and their risks; and 3) the company's vision and process for solving business problems with generative AI are often not well integrated into familiar communication channels.
The larger solution isn't simply adding an AI checklist to your style guide, of course. It requires rethinking how your brand engages with AI-powered customer experiences, then adding guidance on risks, processes and representative use cases. For example, consider these critical intersections:
Data Integrity: Your brand values may champion fairness, but what if your AI systems are trained on biased data? How do you audit and mitigate these risks?
Trust & Privacy: Customer trust, once lost, is expensive to regain. How are you safeguarding personal data across your AI implementations? What governance ensures responsible collection and use?
Regulatory Compliance: While the U.S. takes a sectoral approach to AI regulation, the EU's rights-based framework reaches any brand serving EU customers, wherever it operates. Have you mapped your AI initiatives against emerging regulatory requirements?
Intellectual Property: As AI systems increasingly generate content, how do you ensure outputs align with both legal requirements and brand integrity? What processes validate proper licensing and attribution?
Transparency: Should your brand proactively disclose AI use to customers? How might transparent AI practices differentiate your brand in an evolving marketplace for customers and employees?
The scope of your AI guidelines should mirror your organization's AI ambitions. A series of facilitated workshops is a good place to start: they can help map where AI can affect your brand, and how those impacts will be managed and communicated across your organization of creators.
"Especially in the age of GenAI, folks are very concerned with the huge amounts of data being ingested by these models, typically uncurated data. So you have all sorts of challenges in terms of issues of consent, intellectual property, biases within the data, transparency around the collection process, and compensation of individuals involved in the data creation process. There's so many of these issues and it's easy to list out the issues, but it's very hard to say how would you actually try to tackle all of those and address them, and even at a small scale, let alone at a big scale." (Alice Xiang, Global Head of AI Ethics at Sony, on The Road to Accountable AI, Nov. 14, 2024)
The brands that thrive in the AI era won't necessarily be those with the most advanced technology, but those who deploy it most thoughtfully. Your style guide's AI chapter may be the perfect place to begin that journey.