How emotion analytics will impact the future of NLP

Conversational agents and chatbots struggle to understand complex human speech, including sarcasm. But that could change as NLP increasingly incorporates emotional understanding.

Humans interact with machines more every day, but those experiences can be frustrating. Virtual assistants, chatbots and other natural language processing systems often fail to recognize emotions, so they can't react in context.

For example, businesses tend to deploy chatbots and conversational agents for customer service because natural language understanding and natural language processing (NLP) systems can resolve common problems faster and more cheaply than human employees can.

However, many customer interactions fall outside the system's capabilities. When that happens, the system may attempt to clarify what the customer wants to do -- sometimes several times -- before routing the customer to a live agent. By then, the customer may have abandoned the chatbot or interactive voice response system out of sheer frustration.

Emotion as the future of NLP

"We want machines to work like us, but usually when we say that we mean that machines should behave like the parts of people that we like. We want the best of both worlds," said Jen Hsin, co-founder and head of data science at AI-powered sales platform SetSail. "The key word is context. Machines are not super good at picking up on context."

Human language is complex. People express themselves differently depending on their culture or age. They also don't always say what they mean, or they use sarcasm, which makes it difficult for AI models -- or even other people -- to understand them.

Teaching natural language processing models

Marketers use sentiment analysis to determine whether customers perceive a brand positively, negatively or neutrally. But human emotions are nuanced and difficult to understand.
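
As a rough illustration, here is a minimal sketch of that three-way bucketing in Python, using NLTK's off-the-shelf VADER analyzer. The customer comments are invented, and the cutoff values follow VADER's commonly cited convention rather than anything from the experts quoted here:

```python
# A minimal sketch of three-way brand sentiment bucketing with NLTK's
# VADER analyzer. Thresholds follow VADER's commonly cited convention;
# the customer comments are invented examples.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

def brand_sentiment(text: str) -> str:
    """Bucket a customer comment as positive, negative or neutral."""
    compound = sia.polarity_scores(text)["compound"]  # score in [-1, 1]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

for comment in ["Love the new app update!",
                "Shipping took forever and support ignored me.",
                "The package arrived on Tuesday."]:
    print(f"{brand_sentiment(comment):>8}: {comment}")
```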

"It turns out that being happy or excited is not actually correlated to closing a deal in a sales context, so there's a gap between emotion and the action we care about, which in our case is closing sales deals," Hsin said. "The gap is going from knowing the emotion to knowing the intent and sometimes that gap can be very big."

Even within a single use case, it might make sense to have different models that correspond with different states of mind.

"The component that needs to be added is reacting and adapting throughout the customer's interaction, continuously gathering signals," said Hsin.

Reinforcement learning, a method of training machine learning models by rewarding desired behaviors and punishing undesired ones, works well here, Hsin said. Combined with deep learning, reinforcement learning can enable more human-like responses in conversational agents than rules-based approaches can.
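
To make the reward-and-punish idea concrete, here is a toy sketch -- not SetSail's actual system -- in which an epsilon-greedy bandit learns which of three invented response strategies earns the best simulated customer feedback. Real conversational reinforcement learning would operate over full dialogue states rather than a single choice:

```python
import random

# Candidate response strategies and running estimates of their reward.
strategies = ["clarify", "answer_directly", "route_to_agent"]
value = {s: 0.0 for s in strategies}
count = {s: 0 for s in strategies}
EPSILON = 0.1  # fraction of turns spent exploring

def simulated_feedback(strategy: str) -> float:
    """Stand-in for a thumbs-up/down signal from the customer."""
    base = {"clarify": 0.2, "answer_directly": 0.6, "route_to_agent": 0.4}
    return base[strategy] + random.uniform(-0.1, 0.1)

for _ in range(1000):
    if random.random() < EPSILON:          # explore a random strategy
        choice = random.choice(strategies)
    else:                                  # exploit the best one so far
        choice = max(strategies, key=value.get)
    reward = simulated_feedback(choice)    # reward desired behavior
    count[choice] += 1
    value[choice] += (reward - value[choice]) / count[choice]

print(max(strategies, key=value.get))      # typically "answer_directly"
```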

With a rules-based approach, the resulting artificial intelligence model cannot do anything it wasn't programmed to do.

"You can count words which are positive or negative. It's a simple approach but not actually picking up the emotion," said Dan Simion, VP of AI and analytics at global consulting firm Capgemini. "We've made really good progress on NLP, but there's still room for improvement on emotions."

Another challenge is the evolution of language itself. Today, it's common for people to use emojis to punctuate text, but since a single emoji may have multiple meanings, NLP engines struggle to understand them.
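
To make that ambiguity concrete, here is an illustrative sketch in which a single emoji maps to several candidate readings. The sense inventory is invented, and a bare lexicon lookup cannot choose among the readings:

```python
# Invented sense inventory: each emoji maps to several plausible readings.
EMOJI_SENSES = {
    "\U0001F602": ["genuine amusement", "nervous laughter", "mockery"],   # face with tears of joy
    "\U0001F525": ["literal fire", "praise, as in 'this is fire'", "anger"],  # fire
}

def candidate_senses(emoji: str) -> list[str]:
    """A lexicon lookup alone returns every reading -- context must decide."""
    return EMOJI_SENSES.get(emoji, ["unknown: not in lexicon"])

print(candidate_senses("\U0001F525"))
# ['literal fire', "praise, as in 'this is fire'", 'anger']
```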

The complexity of human communication

Meanwhile, informal text-speak has bled over into emails and social media posts. Adapting to that kind of language change can be a relatively easy training problem, but acronyms often have more than one meaning, making context important. In addition, NLP models falter on new slang simply because they weren't trained on the word's new definition.
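
A hedged sketch of acronym disambiguation by surrounding context might look like the following. The expansion table and cue words are invented; a production system would rely on trained context models rather than keyword overlap:

```python
# Invented expansion table: each acronym sense carries a few context cues.
ACRONYMS = {
    "cs": {
        "customer service": {"agent", "ticket", "refund"},
        "computer science": {"degree", "algorithm", "courses"},
    },
}

def expand(acronym: str, sentence: str) -> str:
    """Pick the expansion whose cue words best match the sentence."""
    words = set(sentence.lower().split())
    senses = ACRONYMS.get(acronym.lower(), {})
    return max(senses, key=lambda s: len(senses[s] & words), default=acronym)

print(expand("CS", "I opened a ticket with CS about a refund"))
# customer service
print(expand("CS", "She has a CS degree and takes algorithm courses"))
# computer science
```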

"We need to have models for different states of mind, but the difficulty is understanding the customer's state of mind," Simion said. "What level of angry or frustrated am I dealing with? And based on that, I need to apply the right model based on that level."

Since NLP isn't flawless, companies should consider workarounds that give users a better customer experience. For example, instead of a chatbot asking a customer over and over to clarify what they just said -- an approach that is as frustrating as it is inefficient -- it may be wiser to route the customer to a live agent sooner.
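
A minimal sketch of that workaround, assuming a hypothetical understand() hook and an arbitrary two-attempt cap:

```python
MAX_CLARIFICATIONS = 2  # arbitrary cap for the sketch

def handle_turn(message, failed_attempts, understand):
    """Return (reply, updated failure count); escalate instead of looping."""
    intent = understand(message)  # assumed hook; returns None when lost
    if intent is not None:
        return f"OK -- working on '{intent}'.", 0
    if failed_attempts + 1 >= MAX_CLARIFICATIONS:
        return "Let me connect you with a live agent.", 0
    return "Sorry, could you rephrase that?", failed_attempts + 1

# Toy understand(): only recognizes refund requests.
understand = lambda m: "refund" if "refund" in m.lower() else None

reply, fails = handle_turn("my thing is broke", 0, understand)
print(reply)   # asks to rephrase once
reply, fails = handle_turn("it still broke", fails, understand)
print(reply)   # hands off to a person instead of asking again
```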

In theory, emotion detection would help with this, but, especially in text, it isn't a reliable signal, explained Paul Barba, chief scientist at data analytics vendor Lexalytics.

"People tend to write short, snippy things when they feel certain emotions, but people who speak English as a second language may type in a similar way. If you're not accounting for those, you can get biased results. And depending on what you're trying to use emotion detection for, that can be problematic," he said.

Bias complicates the future of NLP

People are complex. While they're constantly grouped based on demographics and personas, good marketers understand that these are blunt instruments that don't apply equally well to every individual in a group. For example, some older women buy junior clothing aimed at teenagers because the styles align better with their tastes than age-appropriate clothing.

"As humans we understand it's hard to read a person. We need to step back and say these things were never as good as humans so let's be realistic about what we as humans can do across cultures with small amounts of data without knowing a person and having moral expectations," Barba said.

Chatbot or virtual assistant logs can be used as training data, but the data should be monitored for skew and the models for drift. Otherwise, the result could mirror Microsoft's Tay Twitter chatbot debacle, in which the bot morphed from friendly to racist in just 24 hours.
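
As one illustration of what monitoring for skew might look like, the sketch below compares the intent-label mix of recent chatbot logs against a reference window and raises an alert when they diverge. The labels, counts and threshold are invented:

```python
from collections import Counter

def distribution(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Half the L1 distance between two label distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

# Invented log windows: abusive messages are spiking in the recent one.
reference = ["greeting"] * 60 + ["refund"] * 30 + ["abuse"] * 10
recent = ["greeting"] * 30 + ["refund"] * 20 + ["abuse"] * 50

drift = total_variation(distribution(reference), distribution(recent))
if drift > 0.2:  # arbitrary alert threshold for the sketch
    print(f"drift={drift:.2f}: pause retraining and review recent logs")
```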

"These things bring out the biases of humans, as expressed on the internet and that's a brand nightmare," said Barba.
