Session Length measures the amount of time a user engages with a chatbot. While the optimal Session Length will vary from chatbot to chatbot, the metric can act as a warning system that users aren’t engaging with the chatbot in the intended manner.
If Session Length is too short, users may be abandoning the chatbot before it has a chance to solve their problem. If it's too long, the chatbot is not resolving enquiries efficiently and risks frustrating users. The Session Length metric alerts you to conversations the chatbot is failing to handle within an appropriate timeframe.
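As a minimal sketch of how this metric can be computed, assuming session logs are available as hypothetical (start, end) timestamp pairs:

```python
from datetime import datetime

def average_session_length(sessions):
    """Mean session duration in seconds, given (start, end) timestamp pairs."""
    durations = [(end - start).total_seconds() for start, end in sessions]
    return sum(durations) / len(durations) if durations else 0.0

# Hypothetical session log: each entry is a (start, end) pair
sessions = [
    (datetime(2024, 1, 1, 9, 0, 0), datetime(2024, 1, 1, 9, 2, 30)),
    (datetime(2024, 1, 1, 9, 5, 0), datetime(2024, 1, 1, 9, 6, 30)),
]
print(average_session_length(sessions))  # 120.0 (seconds)
```

Comparing this average against the range you consider healthy for your use case is what turns the raw number into a warning system.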
Like Session Length, Steps Per Conversation provides us with a measure of efficiency. It tells us how many steps (one step is equivalent to one question and one response together – a conversational back and forth) it takes to resolve an enquiry.
We want chatbots to strike a balance between a more informal, conversational tone and efficient self-service. This means ensuring that they’re able to provide a solution within a reasonable timeframe. If it takes the chatbot too long to do so, users will avoid the technology in the future.
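A sketch of the counting rule, assuming a transcript is stored as a hypothetical list of (role, message) pairs and that one step is one user message paired with one bot response:

```python
def steps_per_conversation(transcript):
    """One step = one user question answered by one bot response."""
    user_msgs = sum(1 for role, _ in transcript if role == "user")
    bot_msgs = sum(1 for role, _ in transcript if role == "bot")
    # A completed step requires both halves of the exchange
    return min(user_msgs, bot_msgs)

# Hypothetical transcript
transcript = [
    ("user", "Where is my order?"),
    ("bot", "Could you share your order number, please?"),
    ("user", "Here it is."),
    ("bot", "Thanks - it ships tomorrow."),
]
print(steps_per_conversation(transcript))  # 2
```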
At the end of each chatbot interaction, users should be asked to provide feedback. This may be in the form of a simple thumbs up or thumbs down. Alternatively, you could utilise a 1-5 rating system or allow for written comments.
Feedback systems are the most efficient means of finding out what your chatbot is doing well and what users consider its key flaws. When feedback is linked to specific interactions, you can quickly and efficiently return to those conversations to identify areas where the service can be improved.
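A sketch of how thumbs-up/thumbs-down feedback might be aggregated while keeping the link back to specific conversations; the (conversation_id, vote) format is an assumption:

```python
def satisfaction_rate(feedback):
    """Share of thumbs-up among all rated interactions, plus the
    conversation IDs of negative ratings for follow-up review."""
    ups = [cid for cid, vote in feedback if vote == "up"]
    downs = [cid for cid, vote in feedback if vote == "down"]
    rate = len(ups) / len(feedback) if feedback else 0.0
    return rate, downs

# Hypothetical feedback log
feedback = [("c1", "up"), ("c2", "down"), ("c3", "up"), ("c4", "up")]
rate, to_review = satisfaction_rate(feedback)
print(rate, to_review)  # 0.75 ['c2']
```

Returning the negative conversation IDs alongside the rate is what makes the metric actionable: you can jump straight to the interactions that need improving.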
Fallback Response Rate measures the number of times a chatbot is unable to provide an appropriate response to a user enquiry. This could be because it doesn’t understand the question or the language used, or because it hasn’t been designed to process that particular enquiry.
Fallback Response Rates tell us a lot about how a chatbot can be improved. If it is repeatedly stumped by the same question, it may show there’s a problematic gap in its knowledge. On the other hand, it may mean that your chatbot doesn’t understand all of the ways the question can be expressed.
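A sketch of both uses of the metric, assuming a hypothetical log of (question, was_fallback) pairs: the overall rate, plus the questions that most often triggered a fallback:

```python
from collections import Counter

def fallback_rate(responses):
    """Fraction of responses that were fallbacks, plus the questions
    that most often stumped the bot."""
    fallbacks = [q for q, was_fallback in responses if was_fallback]
    rate = len(fallbacks) / len(responses) if responses else 0.0
    return rate, Counter(fallbacks).most_common(3)

# Hypothetical response log
responses = [
    ("reset my password", False),
    ("cancel my subscription", True),
    ("cancel my subscription", True),
    ("opening hours", False),
]
rate, top = fallback_rate(responses)
print(rate, top)  # 0.5 [('cancel my subscription', 2)]
```

The ranked list of repeat offenders is what distinguishes a genuine knowledge gap from a phrasing the bot simply doesn't recognise.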
The ultimate purpose for many chatbots is to automate relatively simple and routine enquiries. In such instances, it’s useful to measure the number of users that successfully self-serve and do not require human help.
Self-Serve Rate helps us measure the extent to which a chatbot is fulfilling its primary function and generates accurate data for your organisation to calculate ROI. It is worth noting that organisations will have different expectations for this metric, as some chatbots will have other functions, such as routing users to a suitable human agent.
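The arithmetic is straightforward; as a sketch, assuming each conversation is flagged with whether it was escalated to a human:

```python
def self_serve_rate(conversations):
    """Share of conversations resolved without handover to a human agent."""
    resolved = sum(1 for escalated in conversations if not escalated)
    return resolved / len(conversations) if conversations else 0.0

# Hypothetical flags: True = handed over to a human, False = fully self-served
conversations = [False, False, True, False]
print(self_serve_rate(conversations))  # 0.75
```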
User Retention measures the number of users that return to interact with a chatbot after their initial conversation. It allows us to see how many people use the service on a repeat basis.
This metric gives us an insight into how many users find the chatbot useful enough to consider it a means of resolving future enquiries. For a chatbot to realise its true potential, users must see it as the first port of call and opt to use it before accessing other channels.
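A sketch of one simple way to compute it, assuming a hypothetical mapping of user IDs to session counts, where a returning user is anyone with more than one session:

```python
def retention_rate(sessions_by_user):
    """Fraction of users who came back for more than one session."""
    returning = sum(1 for count in sessions_by_user.values() if count > 1)
    return returning / len(sessions_by_user) if sessions_by_user else 0.0

# Hypothetical usage data: user ID -> number of sessions
sessions_by_user = {"alice": 3, "bob": 1, "carol": 2, "dave": 1}
print(retention_rate(sessions_by_user))  # 0.5
```

In practice you may also want to window this (e.g. returned within 30 days), but the core ratio stays the same.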
The Volunteer Users metric looks at the number of users who begin an interaction with a chatbot without having been prompted to do so. In its early stages, most users will be prompted to interact with the service in some way. However, as the tech matures, more and more users should arrive at the service without being pushed or directed there.
This metric gives us an insight into the way users are embracing the technology and whether it’s becoming a more naturally preferred customer service tool.
Conversion Rate measures the extent to which a chatbot converts interactions into new business. In a sales context, this will mean how many interactions result in a successful sale. In other instances, it may mean the rate at which the chatbot successfully schedules an appointment or files an application.
Conversion Rate is most useful when used to compare chatbots to human agents. While many organisations want to provide automated responses to as many enquiries as possible, it’s important to be able to prove that your chatbots aren’t costing you new business but are, in fact, performing at least as well as your human agents.
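The comparison itself is just a pair of ratios; as a sketch with entirely hypothetical figures:

```python
def conversion_rate(interactions, conversions):
    """Conversions as a fraction of total interactions."""
    return conversions / interactions if interactions else 0.0

# Hypothetical figures for a side-by-side comparison
bot = conversion_rate(interactions=400, conversions=48)
human = conversion_rate(interactions=250, conversions=30)
print(bot, human)  # 0.12 0.12
```

If the chatbot's rate matches or exceeds the human agents' rate, automation is not costing you new business.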
The last item on our chatbot metrics list is Average Number of Interactions. This measures how many interactions users have with your chatbot on a daily, weekly, and monthly basis. This allows you to keep track of total usage and forms the basis of a number of other metrics, such as the number of active users, engaged users, and new users.
This matters because you need to know whether usage is rising, falling, or has hit a plateau. It acts as the base metric for all kinds of usage and demographic measurements and is a crude (but incredibly important) means of determining whether your chatbot is as popular as it needs to be.
Though chatbots have been around for a little while, they’ve only recently begun to be deployed in large numbers.
Consequently, chatbot analytics is still a relatively young science. While the chatbot metrics listed above are currently considered the most important measurements of chatbot performance, the technology is developing at such a pace that new metrics are emerging on a regular basis. In particular, machine learning and sentiment analysis technologies are radically transforming chatbot capabilities, meaning they're likely to have a pronounced effect on performance analysis in the near future, too. That's why, here at Inform, we not only provide comprehensive analytics for our customers as standard, but also make sure we stay up to date with the latest developments so you can benefit from our knowledge.