In this context, words like “proactiveness”, “personalization”, “persona” can be perceived as a little extra, like the cherry on top you might want to work on if you were really trying to bring your bot to the next level.
So, why bother investing time and effort in these, when the whole point of a conversational AI interface is to automate and make things efficient?
In reality, proactiveness, personalization and, most importantly, bot persona can very often be the keys to unlocking the level of efficiency and automation that companies dream of.
This might sound crazy, but, come to think of it, the only way for a chatbot to deliver on these promises is if it’s actually being used. That’s only possible if:
- The customer can easily find the chatbot, when they need it
- The interaction with the chatbot feels natural and logical
- The chatbot inspires trust
Without these basic premises, users will just wind up confused, frustrated, and asking to be transferred to a human agent asap.
Naturally, there are a lot of things at play in achieving this, but, we’d argue, a cohesive and appealing bot that knows its users well enough to curate their journey and anticipate their needs can heavily contribute to setting these premises.
In this article, we’ll be digging into the 5 most popular misconceptions around bot persona, personalization and proactiveness, in relation to conversational AI. Once these myths have been gently busted and the air has been cleared, we’ll discuss our Aristotelian approach to the subject.
Myth #1: A Bot Doesn’t Need a Persona
“We’re just implementing a small chatbot, with very few, short dialogues. Why bother taking the time to think about its persona? Can’t it just use generic statements, like: Hi, how can I help?, Sure, let’s look into that, and Goodbye, thank you for contacting us ?”
This line of reasoning is understandable (and, perhaps, not as uncommon as conversational AI professionals might like), but not completely unproblematic.
As generic as your messages might be and as restricted as your use cases are kept, the user will naturally try to construe an image of what the entity they’re interacting with is like. Whether you design it beforehand or not, some kind of persona will shine through your bot.
The risk of not taking the time to design it in advance, as we and most other CAI professionals do, is that customers will just end up perceiving the assistant as confusing and/or unreliable. That’s exactly the kind of bot we all have negative associations with and do not wish to engage with.
Not to mention that, even if you’re just starting out with something small, you will likely end up expanding use cases, or reviewing the texts of the initial ones. The lack of guidelines and alignment on how the bot should sound will, at that point, become even more puzzling.
Myth #2: Proactive and Personalized Bots Feel Uncanny and Creepy
When words like “proactive” and “personalized” come up in association with AI, the mind immediately goes to sci-fi and to creepy assistants that know you better than you know yourself. Also, we’ve all seen how the digital clone in Black Mirror’s “White Christmas” was getting along and that doesn’t seem like something we’d want to replicate anytime soon.
Luckily, that’s not at all what conversation designers are usually going for. Going back to the basic premises for making a bot attractive, making the assistant sound natural is one of the top priorities for designers.
In a very simplistic way, you could say that the way to achieve this is to make the bot act in a human-like way, without suggesting that it’s actually human.
Achieving this balance is a lot easier said than done, but one way to move in this direction is to focus on some of the typical behaviors humans naturally adopt when entering into an interaction with someone else.
For example, we know that neurotypical (and some neurodivergent) people are never passive in interactions. Entering a communication setting with someone automatically activates them, setting them off inferring meaning from the verbal and nonverbal cues their interlocutor is giving.
Body language, tone of voice, questions meant as statements, statements meant as questions: all this is taken into account by the listener and, usually, acted upon in their reply.
Think about someone telling a customer service agent: “I could try to figure this out on my own.” The agent would likely infer that the customer doesn’t really feel sure about this and proactively offer to help with the task at hand.
Similarly, humans often have a subconscious tendency to mirror the style of the person they’re interacting with, if they like them. Humans do it both behaviorally, e.g. crossing their legs if their counterpart has been sitting that way, and in the style of conversation, e.g. adapting to the other’s jargon.
These subtle adaptations are barely noticeable, but they all feed into the perception that one will be leaving that interaction with. Similarly, a chatbot, or a voicebot, that’s able to pick up on subtle style cues, or adapt to the way customers talk about company products and features, can leave a much sweeter taste in the mouth.
Myth #3: LLMs Will Be Able to Auto-adapt to the Users, Making Conversation Design Unnecessary
Far be it from us to stand in the way of progress. When LLMs and LLM-based interfaces are able to automatically manage themselves, adapt to the person in front of them, and, generally, replicate human reasoning, we’ll be skeptically excited to see it.
Luckily or not, it looks like there is still some time before we get there. For now, things like bot persona design hold an important place in the construction of conversational interfaces.
Perhaps surprisingly, bot persona has actually become more sensitive and crucial, as LLMs have been improving and companies have started to use them more frequently in customer-facing applications. There are some really practical reasons for that:
- The most essential reason being that LLMs need guidance and prompts to work. Especially if they’re not domain-specific models, designers will need to give them instructions on the steps they need to follow, or what type of information the bot needs to acquire from the user to perform a certain task, and so on.
- The LLM-induced chatbot renaissance has prompted people to (re)discover chatbots. While this is great news for furthering conversational AI awareness and acceptance, it also means that the expectations have grown and there will be more and more effort required to curate experiences that don’t fall short of them.
- AI content tends to feel and sound all the same. This is increasingly true for marketing content and generated emails, but also for chatbot responses. Bot persona is an opportunity to make sure you're pushing branding over blanding, as the visual designers would say.
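To make the first point above concrete, here’s a minimal sketch of how a persona brief might be encoded as instructions for an LLM-based bot. Everything here is illustrative: the persona fields, the bot name “Mira”, and the wording are our own assumptions, not tied to any specific framework or model API.

```python
# Hypothetical sketch: turning a designed persona into an LLM system prompt.
# Field names and rules are illustrative assumptions, not a real product spec.

PERSONA = {
    "name": "Mira",
    "tone": "warm, concise, lightly playful",
    "never": ["pretend to be human", "blame the customer"],
    "task_rules": [
        "Ask for the order number before discussing delivery status.",
        "Offer a human handover after two failed clarification attempts.",
    ],
}

def build_system_prompt(persona: dict) -> str:
    """Render a persona brief as the instruction block an LLM would receive."""
    lines = [
        f"You are {persona['name']}, a customer service assistant.",
        f"Tone of voice: {persona['tone']}.",
        "You must never: " + "; ".join(persona["never"]) + ".",
        "Follow these steps:",
    ]
    lines += [f"- {rule}" for rule in persona["task_rules"]]
    return "\n".join(lines)

print(build_system_prompt(PERSONA))
```

The point of the sketch is that the persona work happens before any model is involved: the designer decides the tone and the guardrails, and the prompt merely transmits them.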
Myth #4: The Ideal Bot for Most People Is Efficient, Not Engaging, Fun, or Personalized
Ok, this might not be a myth entirely. In fact, were you to ask us what we’d want the interaction with our bank, insurance, or grocery store to be like, we’d agree that efficiency is the primary sign of a good experience.
Still, there are some caveats and reasons why we’d argue there is some mythicality in this statement.
For example, a lot of the time we just want to carry out a simple errand, like buying a necessity at the store, or enjoy a simple moment of consumption, like drinking a juice, and all we’re looking for is practicality and efficiency. Yet even in those moments, we’d still appreciate an unexpected touch of humanity.
This could be the moment you find a funny message at the bottom of a juice pack that makes you grin a little, or a cashier at the supermarket who gratuitously compliments you on your shirt, but also an unexpected glimpse of irony, or fun fact, coming from a bot.
On the other hand, there are also situations where, in theory, we are convinced we want efficiency, but, when it comes to carrying out the task, we could do with a little something more.
A great example of this, in the world of chat products, is the Cleo app. A lot of people don’t like to think about money and, if you asked them, they’d probably tell you they’d rather keep the time and experiences related to budgeting to a minimum.
Cleo’s popularity, with over 5 million users, proves that, in fact, an entertaining, jargon-free, sassy chat interface can be just the thing finance-averse users need to help them make better decisions.
Most importantly, an incoherent, outdated persona can give the user a bad first impression. The first moments a bot engages with a customer are notoriously difficult and establish whether the customer will trust it or just switch to another channel.
An engaging, well-placed, distinctive persona gives you the opportunity to leverage the halo effect in your favor, reassuring the user that the interaction will be all-round positive and effective.
Myth #5: All Customer Service Personas Should Be Conformist and Forbearing
Chatbots’ and voicebots’ tone of voice has historically been aligned with the customer support approach: they’re excited to help, always optimistic and reassuring, they’ll take the blame if something goes wrong, and they’ll (try to) make every situation sound positive.
This can create an effect that recalls the previously mentioned visual design paradox of branding vs. blanding. Just as logos and visual identities increasingly look the same, creating what is referred to as a “blandscape”, when all bots start to sound the same, all messages and interactions tend to feel flat.
That means users, in a sort of blandness blindness, might not really read or listen to everything your assistant is saying, which can easily create frustration and misunderstandings, negatively impacting customer experience.
When we talked with Esha Metiary, Senior Conversation Designer, she shared her take on what the ideal bot is like:
“Nowadays, conversation design is all about customer care and pleasing the customer, but, personally, I'm a very big fan of companies that are a little rebellious and not afraid to lose customers because of the views that they stand for.
Adidas is such a company and I think Ben and Jerry's will be such a company. They have a very strong opinion and they're very sure of themselves. They have opinions and they are not afraid to voice them.
Their bots should mirror this image: a little rebellious, playful, not meek. I think chatbots can be helpful, but they can also push back on customers who get really annoying.”
Finding the Golden Mean of Proactivity and Personalization
Now that the air has been cleared, it’s time to consider some practical indications of how to concretely apply the concepts of proactivity and personalization in conversation design. In this article, we won’t dive deeper into bot persona, as it’s a topic we’ve already dealt with in more detail in our Road to Conversation Design report and our article about our 6-step bot building process.
When the term “proactivity” comes up in a conversation design context, people might immediately start thinking about unwanted notifications and cumbersome welcome messages popping up as soon as they land on the homepage of a website.
Similarly, when the idea of personalization comes up, our minds immediately go to recommendations trying to cross-sell us something, or ads that creepily anticipate our needs.
These are, of course, applications of these concepts, but they sit at the very far end of the proactivity and personalization spectrums, and can wind up doing more harm than good.
A more successful approach to proactiveness and personalization is one that finds the desirable middle ground between one extreme, being totally bland and generic, and the other, creepy hyper-personalization and intrusive hyper-proactiveness. An approach, in other words, that falls squarely within the philosophy of Aristotle’s Golden Mean.
Some examples of a Golden Mean approach to personalization and proactiveness in conversational AI could be:
- Making the chatbot on an e-commerce website pop up, not right at the beginning, but when there are signs that the customer might need help. For example, if there’s a coffee machine in the cart and the user is now searching for capsules, it could be a good moment to come in with a proactive message suggesting which capsules work with the coffee machine they’re planning to buy.
- Sending a push notification if there’s an important change, or something that needs to be communicated, but being mindful about each user’s timezone and making the message relevant to them. Generic is fine, personalized makes it shine.
- Skipping questions whose answers you can already derive from data you have (e.g., if the user only has one card linked to their bank account, it would feel highly impersonal to ask whether their inquiry is about their debit or credit card)
- Telling a logged-in user all information about their ongoing order, as soon as they land on the chatbot.
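The first and third examples above can be sketched as simple rules. This is a toy illustration under our own assumptions: the session fields (`cart`, `search_query`, `linked_cards`) and the compatibility map are hypothetical, not any real platform’s API.

```python
# A minimal sketch of "Golden Mean" proactivity rules, following the examples
# above. All field names are hypothetical session data, not a real platform.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Session:
    cart: List[str] = field(default_factory=list)
    search_query: str = ""
    linked_cards: List[str] = field(default_factory=list)

# Toy compatibility map: cart item -> accessory worth a proactive nudge
ACCESSORY_OF = {"coffee machine": "capsules"}

def proactive_message(session: Session) -> Optional[str]:
    """Speak up only when there's a concrete cue that the user needs help."""
    for item in session.cart:
        accessory = ACCESSORY_OF.get(item)
        if accessory and accessory in session.search_query.lower():
            return (f"Looking for {accessory}? Here are the ones compatible "
                    f"with the {item} in your cart.")
    return None  # no signal, no pop-up

def needs_card_type_question(session: Session) -> bool:
    """Skip the debit-or-credit question when the data already answers it."""
    return len(session.linked_cards) != 1
```

With rules like these, a session with a coffee machine in the cart and a capsule search triggers a suggestion, a fresh session stays silent, and a user with exactly one linked card is never asked which card they mean.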
As these examples showcase, there can be different levels of complexity you can pursue in finding your Golden Mean. Generally speaking, proactivity and personalization go hand in hand and can already be incredibly successful when applied in small doses. Anticipating even a little thing can save your user a little bit of time and make them feel that bit more cared for.
Our recommendation, aside from the Golden Mean, is to focus on the use case, the channel, and your own users’ profiles and preferences. Knowing these things confidently will allow you to pinpoint the small things, specific to that context, that will make the biggest difference.
Inspired to explore what personalized and proactive conversational AI can do for your company? We can help.
Feel free to spill out all your conversational needs and ideas via voice message or good old email.
And if you’re still hungry for knowledge, follow us on LinkedIn for weekly updates on the world of conversational AI, or check out our low-down on the best AI agent builder, comparing Voiceflow and Botpress.