(2018-01-28) Facebook Is Trying To Teach Chatbots How To Chitchat

Despite the death of its personal AI assistant M, Facebook hasn’t given up on chatbots just yet. Over the past couple of years, it’s slowly improved what its artificial agents can do, but their latest challenge is something that can confound even the smartest human: making small talk.

As researchers from Facebook’s FAIR lab explain in a pre-print paper published this week, chatbots fail at small talk on a number of levels. First, they don’t display a “consistent personality”: they won’t stick to the same set of facts about themselves throughout a conversation. Second, they don’t remember what they or their conversational partners have said earlier in the exchange. And third, when faced with a question they don’t understand, they tend to fall back on diversionary or preprogrammed responses, like “I don’t know.”

Even with these limitations, chatbots can be engaging. (See, for example, the famous ELIZA bot from the 1960s, which kept users talking with nothing more than simple pattern-matching scripts.)

To try to fix this, Facebook’s engineers have built their own dataset for training chatbots. It’s called Persona-Chat, and it consists of more than 160,000 lines of dialogue sourced from workers on Amazon’s Mechanical Turk marketplace (the standard resource for human-generated data used to train AI).

To give some structure to this data, and to address the challenge of giving chatbots a personality, the Mechanical Turk workers were asked to design their own character to guide their dialogue.
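
As an illustration of what that kind of persona conditioning could look like, here is a minimal sketch in Python. It assumes a persona takes the form of a few short biographical statements that get prepended to the dialogue history before a model generates a reply; the persona text, the dialogue turns, and the build_model_input helper are all invented for illustration and are not Facebook’s actual code or data format.

```python
# Hypothetical sketch: conditioning a chatbot's reply on a fixed persona.
# Neither the persona nor the helper below comes from Persona-Chat itself.

persona = [
    "I have two dogs.",
    "I work as a teacher.",
    "I love hiking on weekends.",
    "My favorite food is sushi.",
    "I grew up in a small town.",
]

dialogue_history = [
    ("partner", "Hi! What do you do for fun?"),
    ("bot", "I love hiking on the weekends with my dogs."),
    ("partner", "Nice! And what do you do for work?"),
]

def build_model_input(persona, dialogue_history):
    """Concatenate persona facts and dialogue turns into one conditioning
    string, which a generative or retrieval model could use to pick a reply
    that stays consistent with the character."""
    persona_block = " ".join(f"your persona: {fact}" for fact in persona)
    history_block = " ".join(f"{speaker}: {utterance}"
                             for speaker, utterance in dialogue_history)
    return f"{persona_block} {history_block} bot:"

print(build_model_input(persona, dialogue_history))
```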

Interestingly, though, while the persona chatbot scored well on fluency and consistency, test subjects said they found it less engaging than chatbots trained on movie dialogue. Facebook’s researchers offer no explanation for this, but one possibility is that the constrained nature of the constructed personas (each one defined by just five biographical statements) meant the bots soon ran out of topics to talk about.

