Documenting chatbots and voice assistants can be a tricky job to wrap your head around. With the number of tools, people, and pieces of content an AI assistant touches, it's hard to keep track of all its aspects (plus the changes that are made to them).
To help us get a better understanding of AI assistant documentation, we asked Anja de Castro, a freelance conversation designer who has brought her knowledge and skills to many noteworthy companies, to share her perspective.
In this interview, you’ll find:
- How Anja approaches documentation when she's working on a project
- What she finds helpful, documentation-wise, when she's starting a new collaboration
- Her experiments and thoughts on LLM-related documentation
How would you describe your role and responsibilities?
As a freelance conversation designer, I’m asked by companies to support them in creating their chatbot and to coach the team. Aside from being responsible for the content, I always ensure I have a say in the way the chatbot looks, too.
There's often not enough focus on the UI, when we know that many people would rather not interact with a chatbot that looks bad. This is the aesthetic-usability effect at work.
Because I will be leaving the company eventually, I need to teach them everything I know and make the team independent, so they know how to take over.
Let's imagine you’re starting from scratch on a new chatbot for a company. Where would you start off & what would you be documenting?
The first step is to define the topics the chatbot needs to cover. We'd document that, along with the persona we're designing. The bot persona is very important, and there are a number of things you need to include in the style guide to ensure anyone on the team, including newcomers, knows how the bot talks.
Tone of voice, which reflects how we want people to feel when they interact with the bot, and the characteristics of the bot are two of the main ones. So: should the bot feel friendly? Engaging? Knowledgeable? Of course, some of these characteristics are common across many customer service bots, but you want to make sure everyone working on it will be reminded of them.
Also, if you're using a language where you can choose between addressing your interlocutor in the second or third person, like Dutch, that's something you want to define. I also once worked with a company that didn't want to use the word "problem" at any point in any conversation: that was also something we made sure to include in the style guide.
Generally, you'll want to incorporate anything related to the rules that make the interaction work, e.g., the number of buttons, or knowing that the "Yes" button should (generally speaking) always come before the "No" button, and the target.
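Rules like these can even be encoded so that draft messages are checked automatically. A minimal sketch in Python — the specific rules here (the banned word, the button cap, the function name) are hypothetical examples, not an actual client's guidelines:

```python
# Hypothetical style-guide rules encoded as data, so drafts can be linted.
FORBIDDEN_WORDS = {"problem"}   # e.g., a client banned this word outright
MAX_BUTTONS = 3                 # assumed cap on quick-reply buttons

def check_message(text: str, buttons: list[str]) -> list[str]:
    """Return a list of style-guide violations for one bot message."""
    violations = []
    lowered = text.lower()
    for word in FORBIDDEN_WORDS:
        if word in lowered:
            violations.append(f"forbidden word used: '{word}'")
    if len(buttons) > MAX_BUTTONS:
        violations.append(f"too many buttons: {len(buttons)} > {MAX_BUTTONS}")
    # "Yes" should (generally speaking) come before "No"
    if "Yes" in buttons and "No" in buttons:
        if buttons.index("No") < buttons.index("Yes"):
            violations.append("'No' button appears before 'Yes'")
    return violations

print(check_message("Sorry, there is a problem with your order.", ["No", "Yes"]))
```

Even a tiny checker like this turns the style guide from a document people have to remember into something the team can run against new copy.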
After you've covered the style guide and persona, what else do you work on and document?
We'd start drafting the flows, and we'd usually do that directly within the platform, where we always keep the content. Just recently, I've been giving more thought to documentation, trying to figure out whether it would make sense to be more thorough.
I also keep a change log, but that’s only really to keep track of big changes, like a whole flow or a button changing.
Another thing I've been doing to make sure the content is up to date is downloading all the articles for the Q&A questions, and I plan to do the same with all the dialogues soon. This way I can check whether all the copy is still correct and, at the same time, make sure the data is stored in more than one place.
What would you change about the way you’ve been approaching documentation so far? Why?
One thing to know about the company I’m working with is that it operates in cycles, which means that in different moments of the year all customers will be in a certain phase of their journey (e.g., Discovery, Buying, Waiting for delivery). For every phase, the chatbot is adapted to suit the needs the customers have in that phase.
In my opinion, it's good practice to document, per phase, which flows were most popular, which topics were offered, and what we'd like to improve. That way, next year we'll know what we did in each phase.
I know of a company in the Netherlands that created copies of their flows on draw.io. Whenever they wanted to make a change, they'd make a copy of the latest copy of the flow with the change applied, so they'd always have a record of the different versions. That seems useful, but it's also a lot of work, and I'm not sure it would have been worth it for us.
You mentioned you’re going through the process of downloading data from the platform you use. Is vendor lock-in something you’re concerned about?
That's definitely something we're talking about. Making data dumps is one way to avoid losing everything you have stored in your current platform. Still, when a company decides to switch providers, it usually has some time to make sure the data is transferred to the new one, if that's technically possible.
You’re also asking to be involved in the UI of the chatbots you work on. How do you keep track of the decisions you make in that area?
Indeed, I'm usually the one in charge of working with the company's design department and deciding how the chatbot's interface should look. Usually, I'm planning changes to make the UI more accessible to visually impaired users, and these are tracked in the change log I mentioned before.
Icons also do a lot for a chatbot's usability, but the ones already available aren't always helpful or usable. For example, I was once trying to use an arrow from the company's design system for the "send" button, but it was already used elsewhere as a location marker. This meant I had to ask the design team to create a new arrow for the chatbot.
Chatbots should really have their own icons, and these should be documented in their own section of the company's style guide or design system. Too often companies will try to fit us into a system that already exists, but that isn't always enough. A typing indicator, for example, is crucial for a chatbot to manage users' expectations, but it's something you only ever use in conversational interfaces.
This is not to say that other departments shouldn't be able to use tools from the conversational team's section of the system; of course they should. The point is that we shouldn't have to rely only on what's already there; we need to be able to develop our own.
Who is the documentation targeted towards? How do you make sure it’s up to date?
This is a good point: as long as I'm the only one accessing the documentation, it doesn't matter what kind of headings or topic titles it uses, but most documentation needs to be accessible to, and used by, all members of the conversational team.
Of course, the conversational team has a lot of contact with other stakeholders in the business, but very rarely will they want to engage with our documentation. So, we don’t have to think about making it accessible to others.
Making sure the documentation is up to date is equally important. When I was working with a theme park in the Netherlands, I explicitly assigned one person on the team to make sure the persona stays up to date. It's a small thing, but it has to be done.
As a designer who works with different companies, what documentation do you find most helpful to consult in your own “onboarding” phase?
When I started my collaboration with the company I'm working for now, I found it really helpful to consult the Q&A documents they use for onboarding new customer service employees. This really helped me understand the company's approach.
Brand guidelines and the UI style guide are also important reads, for example to guide us in defining the tone of voice. That doesn't mean the chatbot's tone of voice should always align 100% with the guidelines, though.
If a company usually uses the third person to address the user, I’ll usually not stick to this for the chatbot. In the company I’m currently working with, everyone is now actually switching to the second person, following the lead of the chatbot team. That also happens sometimes.
Have you already experimented with LLM-related documentation?
Just recently, I've been experimenting with ChatGPT: I fed it customer reviews from Trustpilot, asked it to create a SWOT analysis, and made a presentation for my colleagues to report on it.
These are the things I included in the slides:
- A description of what my goal and expected results were
- The outcome of the SWOT analysis and what ChatGPT identified as positive and negative sides
- Where the data I fed it came from (i.e., Trustpilot)
- What tool I used (i.e., ChatGPT)
- The inconsistencies I was able to spot in the responses. For example, I knew I had fed it 200 lines of data, but when asked it would insist there were 184. It also cited "food and ambience" as some of the aspects customers appreciated the most, but those weren't present in the data, and we don't work with food at all.
It's important to document all the mistakes, to highlight how careful you need to be with generated analyses and to take everything with a grain of salt. Aside from that, I will also include the actual prompt in the presentation, especially after I've optimized it. That way, colleagues with less experience with tools like ChatGPT will also be able to get helpful results with the fewest interactions.
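The slide contents above map naturally onto a small structured record. A hypothetical sketch — the field names and goal text are my own assumptions — that also flags the kind of line-count mismatch described earlier:

```python
import json

def log_llm_experiment(prompt: str, data_source: str, tool: str,
                       input_lines: list[str], claimed_line_count: int) -> str:
    """Record one LLM experiment as JSON, flagging count mismatches."""
    record = {
        "goal": "SWOT analysis of customer reviews",  # hypothetical goal text
        "tool": tool,                                 # e.g., "ChatGPT"
        "data_source": data_source,                   # e.g., "Trustpilot"
        "prompt": prompt,                             # the optimized prompt
        "lines_fed": len(input_lines),
        "lines_claimed_by_model": claimed_line_count,
        "count_mismatch": len(input_lines) != claimed_line_count,
    }
    return json.dumps(record, indent=2)
```

Keeping the prompt and the consistency flags in one record means colleagues can both rerun the experiment and see, at a glance, where the model's output couldn't be trusted.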
Inspired to try creating your own AI agent?
And if you’re still hungry for knowledge, follow us on LinkedIn for weekly updates on the world of conversational AI, or check out our hacks to keep 9 of the most pernicious cognitive biases in check.