Why inclusive AI matters for the creative industry

Published
09 October 2024

What are the questions we should be asking ourselves as AI becomes a bigger part of our day-to-day creative lives? Suhair Khan explores ideas like ‘Emotions and Design’ and ‘Inclusive AI’ on her Substack The Future of Intelligence. Khan is a technologist, design activist and thought leader in culture and innovation. She is the founder of open-ended design, a platform and incubator for ideas and projects at the intersection of technology and creativity. In over a decade at Google and Google Arts & Culture, Khan led initiatives which merged cutting-edge technologies with arts, design, culture, education and environmental sustainability. She sits on the board of trustees for the Design Museum, Sadler’s Wells, Studio Wayne McGregor and the advisory committee of the British Library, and is a lecturer in the Master of Architecture programme at Central Saint Martins.

Here, Khan unpacks some of the ethical questions around not just what AI can do, but what it should do, through the lens of inclusivity and impact at D&AD Awards 2024.

As the Hollywood Writers’ Strike showed us last summer, the creative sector will be the first port of call for game-changing disruption (and opportunity) driven by AI.

With the debate over IP ownership and creativity showing little sign of abating, the creative futures of AI will present legal, social and philosophical challenges that no single act of legislation, research paper or commentary will solve.

The McKinsey Global Institute has suggested that generative AI could create value equivalent to $2.6 trillion to $4.4 trillion in global corporate profits annually. Sectors like banking, the creative industries, law, and even manufacturing will see step changes in human efficiency and output.

Samsung Impulse, Cheil Spain

Not everyone will benefit. Diversity in AI is shaped both by the data systems are built on and by how they are trained, and bias can enter at either stage. Activists point to the ethical challenges AI poses, such as inflaming bias or racism. And while AI will yield large productivity gains, the resulting capital gains will flow unevenly: many could lose employment while a few accumulate wealth on a scale never seen before.

“We risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes,” says Kate Crawford, co-founder of the AI Now Institute.

Can humans save us from biased AI? Are humans who use artificial intelligence any better at making unbiased choices? Is it enough to have humans oversee AI decisions to ensure they are fair and balanced? If so, how? What does it take for systems to be trustworthy and fair?

“Inclusive AI is the practice of designing and developing artificial intelligence systems that respect and represent the diversity of human values, cultures, identities, and abilities.”

There is certainly a growing awareness that AI needs to be kinder, safer, more inclusive, multidisciplinary and intersectional. That awareness needs to be full-stack, spanning everyone from coders and designers to those who sit on the boards of technology corporations.

Across the realms of policy and investment, individuals are confronting how to frame regulation through so-called universal shared value-systems. Technology companies have loaded up on ethics boards and advisors. And in venture capital there are now many Responsible AI venture coalitions and initiatives.

Inclusive AI has many definitions, but I like this crowdsourced one from a long thread on LinkedIn: Inclusive AI is the practice of designing and developing artificial intelligence systems that respect and represent the diversity of human values, cultures, identities, and abilities.

Policymakers and investors speak about human-centredness, a term borrowed from design, or debate the fiduciary duties of corporate leadership to shareholders. AI, they argue, ought to be “trustworthy, responsible, fair, friendly, ethical, good, better, open, transparent.”

If regulation and venture funding do not become more ethics-driven, or at the very least more nuanced in their approach to context and attribution in AI data and AI systems, neither will the tech sector. What does this mean? Quite simply: a clear understanding of where data comes from, who trains the models built on it, and the landscape in which it is framed.

Inclusive AI is a process, not an outcome. It requires a constant dialogue with purpose, landscape and intention. It requires engaging diverse perspectives (across age, gender, race and ability) at every touchpoint in the AI ecosystem, from design and development through deployment, and among every stakeholder. And we live with the reality of planetary entanglement; humans cannot be the only stakeholder.

“To be most effective and most revolutionary, AI does not have to be earth-shattering.”

What are examples of inclusive AI? To be most effective, and most revolutionary, AI does not have to be earth-shattering. There are many applications of AI in health and climate worth highlighting, and even celebrating.

Samsung Impulse, which won wood pencils in Health & Wellbeing and Digital Design at D&AD Awards 2024, is an AI app for Galaxy Watch that helps the roughly 100 million people worldwide who have speech disorders and stuttering. Through an algorithm based on natural language processing, it analyses words and translates them into rhythmic vibrations, giving users an invisible, inaudible assistant on their wrist that helps synchronise the brain with speech. This multifunction app supports users in many situations through voice assistance, an AI speech coach and rhythm-and-tone exercises, all based on a subconscious tempo that activates the neural impulses of language.
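The core idea, speech paced against a steady rhythmic cue, can be sketched in a few lines. This is purely an illustrative toy, not Samsung's actual algorithm: the function name, the one-pulse-per-word mapping and the default tempo are all invented for the example.

```python
# Illustrative sketch only: pacing speech against a steady rhythmic cue,
# in the spirit of Impulse's word-to-vibration mapping. All names and
# numbers here are hypothetical, not Samsung's implementation.

def vibration_schedule(words, tempo_bpm=90):
    """Return (time_in_seconds, word) pairs: one vibration pulse per
    word, spaced at a steady tempo to pace the speaker."""
    interval = 60.0 / tempo_bpm  # seconds between pulses
    return [(round(i * interval, 2), w) for i, w in enumerate(words)]

# Four words at 90 bpm: a pulse every 0.67 seconds.
schedule = vibration_schedule("my name is Alex".split())
```

In a real system the tempo would be tuned per user and the vibrations delivered by the watch's haptics; the sketch only shows the timing logic.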

“The responsibility when an idea like Impulse appears in the agency's brainstorming is to be as strict as possible with the implementation,” said Cheil Spain Executive Creative Director Alejandro Di Trolio, adding: “I don't believe in prototypes, I believe in solutions that are ready to change a reality. The greatest power of these types of ideas is that they are verifiable and scalable… In a world that is becoming so dehumanised, where we seem to be rapidly losing empathy for others, creating an idea that not only generates business but also serves to change realities is a win-win within our industry.”

Contrails – Making Flying More Sustainable with Google AI, Google

Contrails, the thin white lines you sometimes see behind aeroplanes, have a surprisingly large impact on our climate. The 2022 IPCC report noted that clouds created by contrails account for roughly 35% of aviation's global warming impact, over half the impact of the world’s jet fuel. Google Research teamed up with American Airlines and Breakthrough Energy to bring together huge amounts of data, such as satellite imagery, weather and flight path data, and used AI to develop contrail forecast maps to test whether pilots can choose routes that avoid creating contrails. The work picked up a D&AD Future Impact Pencil for Design at the 2024 Awards.
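The trade-off behind contrail avoidance is simple to state: a slightly longer route burns a little more fuel but may skip the ice-supersaturated air where contrails form. The toy below sketches that trade-off; the routes, risk scores and weights are invented for illustration and bear no relation to Google's actual forecasting model.

```python
# Illustrative sketch only: trading a small fuel penalty against a
# predicted contrail risk when choosing a route. All values and the
# weighting scheme are hypothetical, not Google's model.

def pick_route(routes, fuel_weight=1.0, contrail_weight=10.0):
    """Each route is (name, fuel_burn, contrail_risk). Return the name
    of the route minimising a weighted cost of fuel and contrail risk."""
    def cost(route):
        _, fuel, risk = route
        return fuel_weight * fuel + contrail_weight * risk
    return min(routes, key=cost)[0]

routes = [
    ("direct", 100.0, 0.8),  # shortest, but crosses contrail-prone air
    ("detour", 104.0, 0.1),  # slightly longer, mostly avoids it
]
best = pick_route(routes)  # "detour": the small fuel penalty wins out
```

How heavily to weight contrail risk against fuel (and hence CO2) is exactly the kind of judgement the real research has to make with far richer data.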

“AI will augment and foster new forms of human creativity; it will extend the mind and body into new directions in the creative sector and beyond. We can build for more inclusive futures.”

This is why we should be excited about the possibilities of AI. We are in a beautiful new era in which algorithms can create new universalities for making, designing and building. AI will augment and foster new forms of human creativity; it will extend the mind and body in new directions, in the creative sector and beyond. We can build for more inclusive futures. As part of the EU’s AI 2025 policy act, I joined a group of experts from diverse backgrounds spanning AI, ethics, design, art, science, philosophy and sociology. In sharing our perspectives, we concluded that fairness in AI demands a dynamic approach, and that addressing discriminatory outcomes requires a multidisciplinary one.

Exploring the implications of human oversight for fairness and discrimination in AI-supported human decision-making is just the beginning of this exploration. Artists, creative coders, AI researchers and philosophers have been interrogating this space for a long time, and given AI's impact on the creative sector, their work matters now more than ever.

In its power to create both divisions and new universalisms, it really does matter where AI can lead us. Maybe AI is still not always very “good” at what it does. But it certainly has potential to be used for good.