
AI and personas: pros and cons

We have already tried out how ChatGPT creates personas. We also know that this is not a particularly good idea on its own. Nevertheless, AI can actively support our work with personas.

Where AI can help us...

AI is much better than we are at some things: it doesn't make careless mistakes, doesn't get tired or distracted and doesn't get headaches. And it doesn't get bored either. That's why AI is practically predestined to take over all those routine tasks that require a high level of concentration and are time-consuming, monotonous and error-prone.

When creating personas, this applies in particular to the preparatory work: analyzing large amounts of data and putting it into a meaningful context is a perfect task for AI. This includes, for example, the following sources (a small sentiment sketch follows the list):

  • Customer reviews
  • CRM data (socio-demographics, user behavior, contacts, purchasing activities, analytics)
  • Web analytics
  • Social analytics
  • Support data
  • ERP data (transaction data, ...)
  • Search engine data
  • Sentiment analysis, and so on.
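
To give a sense of what such an analysis can look like, here is a deliberately crude Python sketch that boils customer reviews down to a sentiment signal per channel. The keyword lists, column names and sample reviews are invented for illustration; a real project would use proper NLP models on real data instead of keyword counting.

```python
# Toy sketch: turning customer reviews into a simple per-channel sentiment signal.
# The keyword lists, the "source" column and the sample reviews are invented
# placeholders; real projects would use proper NLP tooling on real data.
import pandas as pd

POSITIVE = {"great", "love", "fast", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "expensive", "rude"}

def keyword_sentiment(text: str) -> float:
    """Crude score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

reviews = pd.DataFrame({
    "source": ["shop", "shop", "support", "app store"],
    "text": [
        "Great product, fast delivery",
        "Checkout was confusing and slow",
        "Support was helpful and easy to reach",
        "Love the app but it is expensive",
    ],
})

reviews["sentiment"] = reviews["text"].apply(keyword_sentiment)
print(reviews.groupby("source")["sentiment"].mean())
```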

Depending on the input data and its quality, AI technologies can be used to derive customer or applicant segments from such data. To do this, AI tools usually rely on key technologies such as deep learning and/or neural networks: learning systems that recognize patterns and trends in large amounts of data.

Other methods include predictive analytics (prediction models based on historical data), analysis of customer feedback, and clustering algorithms that form customer segments based on similarities in purchasing behavior or other characteristics; a clustering sketch follows the list below. Based on these customer (or applicant) segments, we can then create data-driven personas, which provide important information about our customers or ideal candidates, for example:

  • Media usage behavior (advertising touchpoints and customer approach)
  • Price sensitivity/salary expectations
  • Purchasing behavior (and its prediction)
  • Design of the purchasing process (or application process, ...)
  • How does targeting work with as little wastage as possible?
  • What hurdles are there when buying the product in question? Or when applying for jobs?
  • What is the decisive thought or situation that prompted the purchase/application? Or to abandon it?
  • Which path did the buyer or applicant take to get to the product/application?
  • Which criteria were important?
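
As an illustration of the clustering step mentioned above, here is a minimal Python sketch using scikit-learn's k-means. The feature columns, the synthetic data and the choice of three clusters are assumptions made for the example, not a recommendation for a real feature set; in practice, the features and the number of clusters would have to be chosen and validated carefully.

```python
# Minimal sketch: forming customer segments with k-means clustering.
# Feature names, synthetic data and k=3 are assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "orders_per_year": rng.poisson(6, 300),
    "avg_basket_value": rng.gamma(2.0, 40.0, 300),
    "days_since_last_purchase": rng.integers(1, 365, 300),
    "support_tickets": rng.poisson(1, 300),
})

# Scale the features so that no single column dominates the distance metric.
X = StandardScaler().fit_transform(customers)

# Cluster into k segments; each segment is a candidate basis for one persona.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
customers["segment"] = kmeans.labels_

# Per-segment averages are the raw material for data-driven personas.
print(customers.groupby("segment").mean().round(1))
```

Each resulting segment can then be enriched with qualitative research (interviews, surveys) before it is turned into a persona.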

However, this only works if we feed the AI with our own, collected or purchased data, together with knowledge of how good the quality of this data actually is. Prompts for creating "data-based" personas via ChatGPT are circulating all over the web. But if ChatGPT does not have access to our own data, the result is personas that have very little to do with reality and are peppered with platitudes.

So even with the help of AI, nothing works in persona creation without actually interviewing people, whether in-house, through representative surveys, by commissioning market or opinion research institutes, or similar.

...and where not

AI needs real data to analyze and classify in order to create personas. When surfing the web, shopping online and commenting on social media, we leave behind vast amounts of such data, consciously and unconsciously. But we still live offline, too, in a reality full of friends, neighbors, colleagues, family members, cafés, marketplaces, workplaces, hair salons and brick-and-mortar retail. In all these places, "real life" takes place with all these people, with gossip, "Did you know...", "I recently discovered/bought/tried an XX..." and so on.

If the customer journey does not happen online, or only partially, AI personas often have gaps. People who get their information offline from other people simply do not appear in many AI personas because no data about them is available.

It is even more difficult with B2B transactions: these very often take place without the help of search engines and web stores, via buying centers with direct contact to the supplier. What's more, several people are usually involved in a buying center.

These are scenarios in which AI cannot really help. But there are also situations in which AI not only fails to help, but actually does harm:

AI models generally learn on the basis of training and test data. These data sets are never neutral, however; they are always shaped by the social reality and by the social assumptions, stereotypes and norms under which they were generated. At its core, AI is always a learning system whose performance depends on how it was developed, on the data it is fed and on its ongoing maintenance.

So if reality is already distorted in the training data, the AI model will be too: if the data is full of racist or sexist content, for example, the model will reproduce these prejudices and this discrimination. Bias-laden personas, and advertising based on them, can upset and annoy entire customer groups and distance them from the brand in the long term. Disgruntled customers can trigger a shitstorm on social media, and the resulting damage to the company's image can deter applicants and cause lasting harm to the company as a whole.

To prevent such scenarios, personas must be data-based, and the underlying data should be as neutral and diverse as possible so that biases can be ruled out.
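
A practical first step in that direction is to check the underlying data for obvious skew before deriving personas from it. The sketch below uses hypothetical column names and toy data; dedicated fairness tooling and a proper audit go much further than a simple cross-tabulation.

```python
# Minimal sketch: checking the data behind the personas for obvious skew.
# The column names ("segment", "gender", "age_group") and the toy data are
# hypothetical; a real bias audit would cover more attributes and methods.
import pandas as pd

data = pd.DataFrame({
    "segment":   ["A", "A", "A", "B", "B", "C", "C", "C", "C", "C"],
    "gender":    ["f", "f", "m", "m", "m", "m", "m", "m", "f", "m"],
    "age_group": ["18-29", "30-49", "30-49", "50+", "50+",
                  "18-29", "18-29", "30-49", "50+", "30-49"],
})

# Share of each attribute value within each segment; strong imbalances are a
# warning sign that the personas may simply reproduce a skewed data source.
print(pd.crosstab(data["segment"], data["gender"], normalize="index").round(2))
print(pd.crosstab(data["segment"], data["age_group"], normalize="index").round(2))
```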
