
AI allows chatbots to access robust language database

24th January 2018
Enaie Azambuja

Before coming to MIT, Jeff Orkin SM ’07, PhD ’13 spent a decade building advanced, critically acclaimed AI for video games. While working on F.E.A.R., a survival-horror first-person shooter game, he developed AI that gave computer-controlled characters an unprecedented range of actions. Today, more than 10 years later, many video game enthusiasts still consider the game’s AI unmatched, even by modern standards.

But for Orkin, the game’s development inspired a new line of interest. “A big focus of that game was getting squads of enemy characters to work together as a team and communicate constantly,” Orkin says. “That got me really into thinking about, ‘How do you get machines to converse like humans?’”

Following the game’s release in 2005, Orkin enrolled in the MIT Media Lab, where he spent the next eight years tackling that challenge. Now, through his startup Giant Otter Technologies, he’s using his well-honed AI skills to help chatbots expertly navigate tricky human conversations.

Giant Otter’s platform uses AI algorithms and crowdsourced annotators to build a natural-language database, compiled “bottom-up” from archived sales and customer support transcripts. Chatbots draw on this robust database to better understand and respond, in real time, to fluctuating, nuanced, and sometimes vague language.

“[The platform] was inspired by the way episodic memory works in the human mind: We understand each other by drawing from past experiences in context,” says Orkin, now Giant Otter’s CEO. “The platform leverages archived data to understand everything said in real time and uses that to make suggestions about what a bot should say next.”
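
The suggestion step Orkin describes can be pictured as a retrieval problem: match what was just said against archived exchanges and reuse what a human agent said next. Below is a minimal sketch of that idea, assuming a toy archive and a simple token-overlap similarity; Giant Otter’s actual matching algorithm is not public.

```python
# A minimal sketch of episodic-memory-style retrieval: suggest the
# bot's next line by finding the most similar archived customer
# utterance and reusing the agent reply that followed it.
# Similarity is token overlap (Jaccard); the archive is illustrative,
# not Giant Otter's data or algorithm.

def tokens(text):
    return {w.strip(".,?!") for w in text.lower().split()}

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Archived (customer utterance, agent reply) pairs from past transcripts.
ARCHIVE = [
    ("my order hasn't arrived yet", "Let me check the status of your order."),
    ("can I get a discount", "I'll see if I can reach your price point."),
    ("I want to cancel my subscription", "I can help you cancel that today."),
]

def suggest_reply(live_utterance):
    """Return the archived reply whose trigger best matches the live line."""
    best_pair = max(ARCHIVE, key=lambda pair: jaccard(live_utterance, pair[0]))
    return best_pair[1]

print(suggest_reply("my package still hasn't come"))
# -> Let me check the status of your order.
```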

The startup is currently piloting the platform with e-commerce and telecommunication companies, pharmaceutical firms, and other large enterprises.

Clients can use the platform as a “brain” to power a Giant Otter chatbot or use the platform’s conversation-authoring tools to power chatbots on third-party platforms, such as Amazon’s Lex or IBM’s Watson. The platform automates both text and voice conversations.

Benefits come in the form of cost savings. Major companies can spend billions of dollars on sales and customer support services; automating even a fraction of that work can save millions of dollars, Orkin says. Consumers, of course, will benefit from smarter bots that can more quickly and easily resolve their issues.

In conversation, people tend to express the same intent with different words, potentially over several sentences, and in various word orders. Unlike other chatbot-building platforms, Giant Otter uses “human-machine collaboration,” Orkin says, “to learn authentic variation in the way people express different thoughts, and to do it bottom-up from real examples.”

Giant Otter’s algorithms comb through anywhere from 50 to 100 transcripts of sales and customer support conversations, identifying variations of the same intent, such as “How can I help you?” and “What’s your concern?” and “How may I assist you?” These variations are called “utterances.” The platform then recombines the utterances into chunks of test scripts for people to judge online for accuracy.
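
As a rough illustration of that clustering step, the sketch below greedily groups look-alike utterances with a simple string-similarity score; the data is invented, and the real algorithm is not public. It also shows why the human judging step matters: surface similarity alone misses true paraphrases.

```python
# Illustrative sketch of the clustering step: greedily group utterances
# whose surface strings look alike, using difflib's ratio as a stand-in
# similarity measure.
from difflib import SequenceMatcher

utterances = [
    "How can I help you?",
    "What's your concern?",
    "How may I assist you?",
    "Your order shipped yesterday.",
]

def similar(a, b, threshold=0.5):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

clusters = []
for u in utterances:
    for cluster in clusters:
        if any(similar(u, member) for member in cluster):
            cluster.append(u)  # single link: join the first matching cluster
            break
    else:
        clusters.append([u])   # no match anywhere: start a new cluster

print(clusters)
# "How may I assist you?" groups with "How can I help you?", but
# "What's your concern?" does not: string similarity alone misses true
# paraphrases, which is the gap the human judging step closes.
```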

Consider a script for a sales call, where a salesperson is selling a product while the prospect is pushing for a discount. Giant Otter’s algorithms match and substitute one utterance in one script with a similar one from another script — such as swapping “I may be able to offer a discount” with “I’ll see if I can reach your price point.”

That version is uploaded to Mechanical Turk or another crowdsourcing platform, where people vote “yes” or “no” on whether the substituted sentence makes sense.
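
A hypothetical version of how such a task could be assembled is sketched below; the JSON schema is an assumption, since real Mechanical Turk tasks are defined through MTurk’s own templates and API.

```python
# Sketch of assembling a substitution-check task for crowdworkers.
# The task schema here is hypothetical.
import json

script = [
    ("prospect", "That's more than I wanted to spend."),
    ("salesperson", "I may be able to offer a discount."),
]

candidate = "I'll see if I can reach your price point."

def make_task(script, line_index, candidate):
    """Swap one line for a candidate paraphrase and ask if it still fits."""
    modified = list(script)
    role, _ = modified[line_index]
    modified[line_index] = (role, candidate)
    return {
        "question": "Does the replaced line still make sense in context?",
        "original": [f"{r}: {text}" for r, text in script],
        "modified": [f"{r}: {text}" for r, text in modified],
        "answers": ["yes", "no"],
    }

print(json.dumps(make_task(script, 1, candidate), indent=2))
```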

In a second human task, people break conversations into “events”: Giant Otter lays out a conversation horizontally, and annotators label its different sections.

A salesperson saying, “Hello, thanks for contacting us,” for instance, may be labeled as “call opening.” Other section labels include “clarifying order,” “verifying customer information,” “proposing resolution,” and “resolving issue.”
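
The output of this labeling task can be pictured as a conversation stored as an ordered list of labeled segments, as in the illustrative sketch below; the labels come from the article, but the schema is an assumption.

```python
# One way to picture the output of the event-labeling task: a
# conversation stored as an ordered list of labeled segments.
from dataclasses import dataclass

@dataclass
class Event:
    label: str             # e.g. "call opening", "proposing resolution"
    utterances: list[str]  # the lines annotators grouped under this label

conversation = [
    Event("call opening", ["Hello, thanks for contacting us."]),
    Event("clarifying order", ["Can you give me your order number?"]),
    Event("proposing resolution", ["I can reship that today at no charge."]),
]

for event in conversation:
    print(f"{event.label}: {event.utterances}")
```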

“Between these two tasks, we learn a lot about the structure of how conversations unfold,” Orkin says. “Conversations break down into events, events break down into utterances, and utterances break down into many different examples of saying the same thing with different words.”
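
That hierarchy maps naturally onto a nested structure, sketched below with invented example content: conversation types contain events, events contain utterances (intents), and each utterance maps to many paraphrases.

```python
# The hierarchy Orkin describes, as a nested structure. Example
# content is illustrative, not Giant Otter's actual schema.
database = {
    "sales call": {                          # conversation type
        "call opening": {                    # event
            "greeting": [                    # utterance / intent
                "How can I help you?",
                "What's your concern?",
                "How may I assist you?",
            ],
        },
        "negotiating price": {
            "offer discount": [
                "I may be able to offer a discount.",
                "I'll see if I can reach your price point.",
            ],
        },
    },
}

n = len(database["sales call"]["call opening"]["greeting"])
print(f"{n} known ways to open a call")
```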

This builds a robust language database for chatbots to recognize anywhere from a few to more than 100 different ways to express the same sentiment — including fairly abstract variations.

This is important, Orkin says, as today’s chatbots are built top-down, by a human manually plugging in various utterances. But someone seeking an order status update could say, for instance, “My order hasn’t come, and I checked my account, and it said to contact customer support.”

“Nowhere does the person even say ‘status.’ If I was creating content for a bot by hand, there’s no way I would have thought of that,” Orkin says. The platform continues to learn and evolve after chatbots are deployed.
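
The order-status example illustrates why matching against learned examples beats hand-written keyword rules; the sketch below contrasts the two, with invented paraphrase examples standing in for what the platform learns from transcripts.

```python
# Why example-based matching beats keyword rules for the "order status"
# case: the customer never says "status". The examples are invented
# stand-ins for what the platform learns from transcripts.

EXAMPLES = {
    "order status": [
        "where is my order",
        "my order hasn't arrived",
        "has my package shipped yet",
    ],
    "request discount": [
        "can you do better on price",
        "is there a discount available",
    ],
}

def tokens(text):
    return {w.strip(".,?!") for w in text.lower().split()}

def overlap(message, example):
    return len(tokens(message) & tokens(example))

def classify(message):
    # Score each intent by its best-matching example, not by keywords.
    return max(EXAMPLES, key=lambda intent:
               max(overlap(message, ex) for ex in EXAMPLES[intent]))

msg = ("My order hasn't come, and I checked my account, "
       "and it said to contact customer support")
print("status" in msg.lower())  # False: a keyword rule would miss this
print(classify(msg))            # order status: the examples catch it
```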



Image credit: MIT.
