Conversational Quotient — NLP, NLU and NLG for chatbots
The intelligence level of a human is assessed by IQ (intelligence quotient).
Shouldn’t a chatbot be assessed for its intelligence by using a similar quotient?
What should we measure in a chatbot to assess its intelligence?
While there can be many ways to assess the intelligence level of a chatbot, one of the most critical aspects is the ability to carry out contextual, engaging and comprehensible conversations with humans.
Employing natural language technology to build conversational intelligence into chatbots is at the forefront of research, and companies are pouring billions of dollars into finding ways to do that.
In this article, we will explore three such technologies:
- NLP — Natural Language Processing
- NLU — Natural Language Understanding
- NLG — Natural Language Generation
Natural Language Processing
This is the most well-known of the three technologies above. It deals with processing the text messages a user sends: breaking them into parts, annotating them with grammatical information and identifying items of interest.
The following are some of the common processing elements in NLP:
- Tokenization — splits a message into sentences, and a sentence into words.
- Normalization — puts all words on the same footing, e.g. by converting them all to upper or lower case
- Stop Words Removal — stop words are frequently occurring words like ‘the’, ‘and’, ‘a’, etc. that do not contribute greatly to understanding the text and so can be removed
- Stemming — eliminates affixes from words to get the word stem e.g. liking -> like
- Lemmatization — similar to stemming but returns the canonical form of a word based on its lemma e.g. better -> good
- POS Tagging — short for part-of-speech tagging; assigns tags such as noun, pronoun, verb, adjective and adverb to words
- Bag of Words — a sentence is regarded as a multiset of words disregarding grammar and word order
- N-grams — contiguous sequences of adjacent words in a sentence, needed to capture meaning correctly, e.g. ‘machine learning’ is a bi-gram
- TF — term frequency; the number of times a word occurs in a message or sentence, signifying the importance of that word
- Named Entity Recognition — identifying and tagging words that represent real-world entities such as persons, organizations, places, dates etc.
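Several of the steps above can be sketched in plain Python. This is a deliberately simplified illustration (the tiny stop-word list and the regex tokenizer are my own toy choices); real NLP libraries such as NLTK or spaCy handle these steps far more robustly.

```python
import re
from collections import Counter

# A small illustrative stop-word set; real lists are much longer.
STOP_WORDS = {"the", "and", "a", "an", "to", "of", "is", "in"}

def tokenize(sentence):
    """Split a sentence into word tokens."""
    return re.findall(r"[A-Za-z']+", sentence)

def normalize(tokens):
    """Put all words on the same footing by lower-casing them."""
    return [t.lower() for t in tokens]

def remove_stop_words(tokens):
    """Drop frequently occurring words that add little meaning."""
    return [t for t in tokens if t not in STOP_WORDS]

def bag_of_words(tokens):
    """A multiset of words, disregarding grammar and word order."""
    return Counter(tokens)

def ngrams(tokens, n=2):
    """Contiguous sequences of n adjacent words."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def term_frequency(tokens):
    """Fraction of the message each word accounts for."""
    counts = Counter(tokens)
    total = len(tokens)
    return {word: count / total for word, count in counts.items()}

message = "The quick brown fox jumps over the lazy dog"
tokens = remove_stop_words(normalize(tokenize(message)))
print(tokens)                # ['quick', 'brown', 'fox', 'jumps', 'over', 'lazy', 'dog']
print(ngrams(tokens, 2)[0])  # ('quick', 'brown')
```

Each function here maps directly onto one of the processing elements listed above, and they compose into a simple pipeline.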
Please note that the accuracy of the above depends on the text documents that were used for training the machine learning and statistical models for NLP.
So, a model trained on Wikipedia articles and one trained on Twitter messages will give different results. Depending on the type of problem that needs to be solved with NLP, either an appropriate dataset can be identified for training a model, or a pre-trained model built on a similar dataset can be chosen. Also, a model trained on English text cannot be used to process text in other languages; a model is specific to the language on which it was trained.
Natural Language Understanding
Put simply, NLU is the ability of a computer to comprehend language, and it falls into the area of machine comprehension. Just as a human can read a message, interpret it and understand its meaning, context and intent, the goal is for a machine to be able to do the same.
NLU is considered an AI-hard problem, meaning that solving it could give machines true artificial general intelligence.
With respect to a chatbot, one of the key NLU problems is determining the ‘intent’ of a user’s message, otherwise referred to as intent classification.
Let’s take an example. Say a user sends the following message to a chatbot:
‘I want to travel from London to New York’
A chatbot with NLU would be able to recognize this as a ‘booking intent’, i.e. the user wants to make a booking. If it also knows that this user typically travels from London to New York by flight, it can narrow the intent down to a ‘flight booking intent’.
If the user had instead typed ‘I want to go to New York’, the chatbot would still have determined the intent as a ‘flight booking intent’ and additionally inferred the user’s current location as London. This is true understanding of the meaning of a message, using contextual information to arrive at a deeper meaning.
NLU is achieved by taking a machine learning classification algorithm and tons of training data comprising user messages paired with the correct intents, and building a model that can accurately classify a user’s intent. The model should also be trained on variations of messages with the same intent so that it can handle ambiguous sentences.
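To make the idea concrete, here is a toy intent classifier in plain Python. The intents, example messages and Jaccard-overlap scoring are all invented for illustration; a real system like Rasa NLU trains a statistical model on far more data rather than matching against a handful of examples.

```python
# Toy training data: each intent maps to a few labeled example messages.
TRAINING = {
    "flight_booking": [
        "i want to travel from london to new york",
        "book me a flight to paris",
        "i need to fly to berlin tomorrow",
    ],
    "weather_query": [
        "what is the weather like today",
        "will it rain in london this weekend",
        "show me the forecast for tomorrow",
    ],
}

def classify(message):
    """Return the intent whose example overlaps most with the message.

    Uses Jaccard similarity over word sets as a crude stand-in for a
    trained classifier.
    """
    tokens = set(message.lower().split())
    best_intent, best_score = None, 0.0
    for intent, examples in TRAINING.items():
        for example in examples:
            example_tokens = set(example.split())
            overlap = len(tokens & example_tokens) / len(tokens | example_tokens)
            if overlap > best_score:
                best_score, best_intent = overlap, intent
    return best_intent

print(classify("I want to go to New York"))  # flight_booking
```

Even this crude overlap score picks the booking intent for ‘I want to go to New York’, because the message shares more vocabulary with the booking examples than with the weather ones; a trained classifier generalizes the same idea statistically.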
Tools such as Rasa NLU can be used to easily develop NLU for a chatbot.
Natural Language Generation
The challenge with human language is that it does not conform to a pre-defined format or script. At different points in time the same human may type different messages for the same intent. Moreover, humans are often abstract and ambiguous in conversation. How does the chatbot handle such situations?
A chatbot possessing NLG ability knows what exact, clear response (message) to generate for a corresponding user message.
In the above example, for the message ‘I want to travel from London to New York’, the chatbot would have to understand that booking a flight also requires the date and time of travel, and thus ask the user for a specific date of travel, time of travel, return journey etc.
However, if the user had already provided this information in the message then the chatbot should not ask for the same information again.
In the first case the chatbot should respond with something like ‘Thank you Mr. ABC for your enquiry. Could you provide the date and time of travel?’
While in the second case it should respond with something like ‘Thank you Mr. ABC for your enquiry. Let me check for available flights and get back soon.’
Notice that the response must be coherent, meaningful, contextual, complete, non-repetitive and clear. This means the response cannot be static; it must be dynamic.
How does the chatbot figure out what sort of response to generate? One way this has been solved is by using a dialog management system: for different input messages, a predictive model can be trained to decide, contextually, what to say next. Rasa Core is one such dialog management system for NLG.
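The behaviour described above, ask for what is missing, confirm when everything is known, is often implemented as slot filling. Here is a minimal sketch; the slot names, templates and function are illustrative inventions, not Rasa Core’s API, and real dialog managers learn this policy from data instead of hard-coding it.

```python
# Slots the flight-booking intent needs before the bot can act (illustrative).
REQUIRED_SLOTS = ["origin", "destination", "travel_date"]

def next_response(name, slots):
    """Generate the next message: ask for missing slots, otherwise confirm.

    `slots` maps slot names to values already extracted from the
    conversation, so the bot never asks for information it already has.
    """
    missing = [s for s in REQUIRED_SLOTS if s not in slots]
    if missing:
        wanted = " and ".join(s.replace("_", " ") for s in missing)
        return f"Thank you {name} for your enquiry. Could you provide the {wanted}?"
    return (f"Thank you {name} for your enquiry. "
            "Let me check for available flights and get back soon.")

# The user gave only origin and destination, so the bot asks for the date.
print(next_response("Mr. ABC", {"origin": "London", "destination": "New York"}))

# Once every slot is filled, the bot confirms instead of repeating questions.
print(next_response("Mr. ABC", {"origin": "London",
                                "destination": "New York",
                                "travel_date": "2018-03-10"}))
```

The dynamic, non-repetitive responses the article calls for fall out of tracking state: the reply is generated from what is still unknown, not from a fixed script.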
Recent research in NLG has produced deep learning approaches, especially neural dialog generation models such as sequence-to-sequence, that are more effective at predicting responses in natural language conversations. These would tremendously improve the comprehension abilities of machines and would also enable chatbots to handle question answering problems.
Looking into the future, it seems certain that chatbots will develop intelligence truly comparable to humans, and that we will assess them using a ‘Conversational Quotient’ (CQ).
Originally published at blog.engati.com on March 5, 2018.