(2018-03-22) Interview Why Chatbots Continue To Fail Us

Amber Case Interview: Why Chatbots Continue To Fail Us

My grandfather worked on AI, and my dad worked on concatenated-speech voice systems for telecoms. I grew up testing the interfaces and the chat systems, then I grew up in a smart home, and later built a voice assistant chatbot for IoT and got to see it evolve over four years, plus another four years of history on top of that.

We're awful. We're in the exact same spot we've always been. It's a scam.

We start with Joseph Weizenbaum

He makes ELIZA, the chatbot.

The reason it works is that he put in the most stereotypical, worst questions a psychologist asks, like "How are you feeling?", "How do you feel about that?", "How about your mother?", and "Tell me about your parents."

What it ends up doing is providing a non-intrusive, nonjudgmental interface for people to write to themselves. It's a series of self-journaling prompts, basically. It's just a fill-in-the-blank Mad Lib, if you think about it.

The bot is never going to offer any insights. The bot just gets your own imagination working better than it would have if you were stuck in a mental loop.
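To make that mechanism concrete, here is a minimal sketch in Python of the fill-in-the-blank idea described above: a few canned reflective prompts plus simple keyword patterns. The specific patterns and responses are illustrative stand-ins, not Weizenbaum's original script.

```python
import re
import random

# Illustrative ELIZA-style rules: match a keyword, reflect it back in a
# template. These patterns are assumptions for the sketch, not the real
# ELIZA script.
PATTERNS = [
    (re.compile(r"\bmy (mother|father|parents)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
]

# Generic journaling prompts used when nothing matches.
DEFAULTS = ["How are you feeling?", "How do you feel about that?", "Please go on."]

def eliza_reply(user_input: str) -> str:
    for pattern, templates in PATTERNS:
        match = pattern.search(user_input)
        if match:
            # Fill the blank in the template with whatever the person said.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(eliza_reply("I feel stuck in a loop"))  # e.g. "Why do you feel stuck in a loop?"
    print(eliza_reply("Nothing much happened"))   # falls back to a generic prompt
```

The bot never contributes content of its own; everything substantive comes from the person typing, which is exactly the point being made here.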

You don't need anything high-tech. You just need some blocks, and the kid's imagination will take care of the rest. For bots, you just need very simple responses, not necessarily a fully reactive chat interface, and that will give you more than if you overbuilt the thing.

When they spew sentences that sound like a human, we have expectations that we can talk back to them on a human level.

It gives us no information about how dumb these things are

How do these things help us? By giving us information at the right time, by allowing us to have choices, by bubbling up important details that we might need in a way that allows us to just glance ambiently and maybe request more with some key terms.

For a really good customer-service automation system, for instance, most of the data on what people have asked in the past will be written down, transcribed, and tagged, with an information architect involved, not a data-science system. An information architect knows how to categorize information. Data science is just something you throw at a problem when you've collected the wrong data and have too much money to overpay for it.

If something is not in the system, it automatically connects to a real-life person, who is brought into the chat history so they can resolve the problem. That's the hybrid approach. I wanted to make a bot that did this for me.

If somebody asks a question that's not in the database, it should send me a text. I should be able to text back the response, and the bot should be able to capture it, or store a variety of responses.
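A rough sketch of that hybrid loop, assuming a simple FAQ dictionary and placeholder functions (send_text_to_owner, wait_for_owner_reply) standing in for whatever SMS or notification service you would actually wire up:

```python
# Hybrid bot sketch: answer from a curated FAQ first, escalate to a human
# when the question is unknown, then remember the human's answer.
# send_text_to_owner() and wait_for_owner_reply() are assumptions for the
# sketch, not a real messaging API.

faq = {
    "what are your hours": "We're open 9am-5pm, Monday through Friday.",
}

def send_text_to_owner(question: str) -> None:
    print(f"[sms] New question needs an answer: {question}")

def wait_for_owner_reply(question: str) -> str:
    # Placeholder: a real system would block on an inbound SMS or webhook.
    return input(f"Your reply to '{question}': ")

def answer(question: str) -> str:
    key = question.strip().lower().rstrip("?")
    if key in faq:
        return faq[key]                # known question: the bot handles it
    send_text_to_owner(question)       # unknown question: escalate to a human
    reply = wait_for_owner_reply(question)
    faq[key] = reply                   # store the answer so the bot learns it
    return reply
```

The bot stays dumb on purpose; the human fills the gaps, and each escalation grows the curated database.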

It's a cybernetic interaction, a feedback loop. The best stuff, like Google Search, handles 70% of the work and then gives the other 30% back to us; we choose from that. A bot that makes the choice for us can fail disastrously.
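One way to read that principle in code: have the system retrieve and rank candidates but hand the final choice back to the person, rather than acting on the top result automatically. The scoring function below is a toy stand-in for a real search backend, purely for illustration.

```python
from typing import List, Tuple

def score(query: str, doc: str) -> int:
    # Toy relevance score: count query words that appear in the document.
    return sum(1 for word in query.lower().split() if word in doc.lower())

def suggest(query: str, docs: List[str], k: int = 3) -> List[Tuple[str, int]]:
    # The system does the bulk of the work (retrieve and rank)...
    ranked = sorted(((d, score(query, d)) for d in docs),
                    key=lambda pair: pair[1], reverse=True)
    # ...but returns a shortlist so the person makes the final choice.
    return ranked[:k]

def auto_decide(query: str, docs: List[str]) -> str:
    # The failure mode: the bot picks for us and acts on it.
    return suggest(query, docs, k=1)[0][0]
```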

