For marketers, perhaps the most exciting application of emerging technologies such as generative AI and large language models is increased personalisation.
Silvio Palumbo, a BCG partner who runs the firm’s personalisation efforts in North America, sees two primary benefits: automation and better recommendations. He describes the latter as “scaling the marketers’ intuition”.
Unfortunately, the technology is not yet available to upload someone’s entire brain and decision-making process into the cloud. Instead, algorithms approximate human reasoning by studying cause-and-effect relationships across massive data sets of thousands of examples. The patterns they learn are then used to make new decisions and generate additional material.
In practice this should mean better suggestions that more closely relate to our interests and desires. Netflix has long used AI to analyse viewing habits and make personalised show recommendations. Clothing companies adjust what outfits they show based on users’ previous orders and other interests.
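A minimal sketch, with a hypothetical catalogue and order history, illustrates the principle: items whose features overlap most with a customer’s past purchases rank highest.

```python
# Toy content-based recommender. Catalogue, tags and orders are invented,
# purely to illustrate the principle described above.
from collections import Counter

# Hypothetical catalogue: each item tagged with style features.
catalogue = {
    "linen blazer":   {"smart", "summer"},
    "running shorts": {"sport", "summer"},
    "wool overcoat":  {"smart", "winter"},
}

# Features drawn from a customer's previous orders.
past_orders = [{"smart", "summer"}, {"smart", "winter"}]
profile = Counter(tag for order in past_orders for tag in order)

def score(features: set[str]) -> int:
    """Count how often each of an item's features appears in the history."""
    return sum(profile[tag] for tag in features)

# Items sharing more features with past purchases rank higher.
for item, tags in sorted(catalogue.items(), key=lambda kv: -score(kv[1])):
    print(item, score(tags))
```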
But these systems can inherit systematic skew from the data used to train them. AI algorithms are mirrors reflecting the data we feed them; if that data is incomplete, unrepresentative or prejudiced, so too will be the outcomes.
The Washington Post demonstrated this through experiments with AI image generators such as Stable Diffusion and DALL-E. When asked to depict “attractive people”, the models frequently showed someone young and light-skinned. A “person at social services” was a bedraggled minority woman with messy hair, while a “productive person” was white, male and bespectacled.
Fixing or removing bias is complex. Google paused Gemini’s image generation feature after overcorrection made it all but impossible to generate accurate depictions of white people in certain European historical scenes.
Echo chambers are another risk. On social media, users may only see content based on what they have previously liked: no discovery, no exploration of anything new. Digital bubbles constructed by personalisation algorithms can isolate individuals from diverse perspectives and challenging information, and this homogenised information diet is likely to create new problems.
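A toy simulation, with invented topics, shows how quickly the loop closes: if the system always serves the top interest and each click reinforces it, the same topic wins every round and the others never resurface.

```python
# Toy echo-chamber loop with invented topics: always serve the top interest,
# and let each click reinforce it. One topic wins every round.
from collections import Counter

profile = Counter({"politics": 3, "science": 2, "sport": 1})

for step in range(5):
    pick = profile.most_common(1)[0][0]  # serve the strongest interest
    profile[pick] += 1                   # the click reinforces it further
    print(step, pick, dict(profile))
```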
Take the case of women-owned businesses. Historical patterns indicate that these businesses receive less funding than their male-owned counterparts. An AI system trained on this data could perpetuate the disparity by de-prioritising women-owned businesses in funding recommendations or in communications about investment opportunities and business growth. Personalisation driven by historical data furthers injustice.
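To see how that happens mechanically, here is a deliberately crude sketch with invented figures: a “model” that simply learns historical approval rates per group scores women-owned businesses lower, and any recommender ranking by that score inherits the disparity.

```python
# Deliberately crude illustration with invented figures: a "model" that just
# learns historical approval rates per group reproduces the disparity.
historical = [
    # (owner_group, funded)
    ("women-owned", False), ("women-owned", False), ("women-owned", True),
    ("male-owned",  True),  ("male-owned",  True),  ("male-owned",  False),
]

def learned_score(group: str) -> float:
    """The 'model' here is simply the historical funding rate for a group."""
    outcomes = [funded for g, funded in historical if g == group]
    return sum(outcomes) / len(outcomes)

# A recommender ranking by this score de-prioritises the group that was
# underfunded in the training data.
for group in ("women-owned", "male-owned"):
    print(group, round(learned_score(group), 2))  # 0.33 vs 0.67
```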
Addressing bias in personalisation requires a multifaceted approach. It’s crucial that AI systems be designed with an understanding that they can either reinforce or counteract existing societal biases. To prevent the former, it’s essential to diversify training datasets and build algorithms that do more than blindly emulate the past.
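One standard mitigation, sketched below with invented data, is to reweight training examples by the inverse frequency of their group, so a minority group counts as much in aggregate as the majority rather than being drowned out.

```python
# One standard mitigation, sketched with invented data: weight each training
# example by the inverse frequency of its group, so a minority group counts
# as much in aggregate as the majority, instead of being drowned out.
from collections import Counter

samples = [("group_a", 1)] * 80 + [("group_b", 1)] * 20  # imbalanced data
counts = Counter(group for group, _ in samples)

weights = {
    group: len(samples) / (len(counts) * n)  # inverse-frequency weighting
    for group, n in counts.items()
}
print(weights)  # {'group_a': 0.625, 'group_b': 2.5}
```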
Marketing teams need to get involved in creating data sets, to ensure campaigns run on well-rounded, up-to-date and unbiased data. Yes, this is traditionally the province of IT, tech and product teams. But control over the input makes the output more reliable: “garbage in” equals “garbage out”. Human oversight can ensure ethical considerations and societal values are not overshadowed by efficiency and market pressure.
Customers will get ever savvier about how companies create the marketing they see. They have already demanded greater transparency about how the data shaping their digital experience is collected and processed.
Countries are considering giving consumers the right to remain anonymous and opt out of personalisation. If companies’ solicitations seem irrelevant or overly intrusive, customers can simply turn them off. The possibilities for creating content with AI are vast – but only if marketers take a strong interest in the underlying data producing the results.
Zoe Forbes-Pyfrom is a marketing and PR account manager in London