Beware the Wild Chatbot! | @ThingsExpo #AI #IoT #M2M #Chatbot
Do chatbots really need artificial intelligence? AI may do more harm than good.
By: Robin Miller
May. 14, 2017 12:00 PM
Chatbots are taking over the world. They have their own magazine, their own dot-org website, and their own versions of popular "meet someone" sites. Far too often, when we chat with them we think we're chatting with humans. Do you really think all those helpful "live chat" offerings that are popping up on e-commerce sites have call centers full of actual people behind them 24/7? Maybe they do, and maybe they don't. You may not be able to tell very easily, especially if the bot behind the chat utility you're using is programmed to hand you off to a human if you need a response beyond what the bot can give you.
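That handoff pattern is simple to sketch. Here is a minimal, hypothetical version in Python: the canned answers, topics, and escalation message are illustrative placeholders, not taken from any real product.

```python
# Sketch of the "bot first, human fallback" pattern: answer from a small
# canned set, and hand off to a human whenever nothing matches.

CANNED_ANSWERS = {
    "shipping": "Orders ship within 2 business days.",
    "returns": "You can return items within 30 days.",
    "hours": "Support is available 24/7.",
}

def respond(message):
    """Answer from the canned set, or escalate to a human agent."""
    text = message.lower()
    for topic, answer in CANNED_ANSWERS.items():
        if topic in text:
            return answer
    # Nothing matched: hand the conversation off instead of guessing.
    return "Let me connect you with a human agent."
```

From the customer's side, the seam between bot and human in a flow like this can be nearly invisible, which is exactly why it is hard to tell who you are chatting with.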
Illustration courtesy of Grid Dynamics
Are Chatbots Dangerous?
A smart customer can easily tell when a chatbot is a chatbot, not a real person. Even bots backed by fairly sophisticated AI (Artificial Intelligence) can be tripped up without too much trouble, while less-sophisticated ones, like Amazon's Alexa, are so limited that playing with their limitations can make a great game for children. A lot of Alexa's problems have to do with voice recognition, and today we're concentrating on text-based chatbots; we'll give voice-based chatbots their own article in the near future. But voice recognition isn't the only kind of problem chatbots face. Text-based chatbots can get into plenty of trouble, too.
The big difficulty with chatbots, AI, and machine learning is that AI isn't very intelligent at this point -- unless the device has been built, a la IBM's Watson, with a nearly unlimited budget. Meanwhile, machine learning sometimes works better than its makers expect. A famous example of machine learning gone awry was Microsoft's Tay, which lived on Twitter and learned to be a "Nazi" from Nazi-leaning trolls.
As one commenter on Ars Technica said, "So Microsoft created a chat bot that so perfectly emulates a teenager that it went off spouting offensive things just for the sake of getting attention?" This is exactly true. Microsoft built Tay to interact with young people on their own terms. The only problem was, it apparently interacted heavily with young people from 4Chan, who tend to delight in pranks, trolling, and putting out the alt-right party line while spewing hate speech about everyone else.
This is not how you want your branded corporate bot to behave. Maybe a little less learning ability would be better than too much, unless you have the resources to monitor your chatbots around the clock -- which defeats their primary purpose: saving on labor costs.
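One middle ground between "too much learning" and round-the-clock monitoring is to gate everything a learning bot says through a moderation check before it is posted. The sketch below is purely illustrative: the blocklist, queue, and function names are hypothetical stand-ins for a real moderation pipeline, not anything Microsoft or Twitter actually ran.

```python
# Illustrative moderation gate for a learning chatbot: nothing the bot has
# "learned" goes out until it clears a blocklist check; flagged replies are
# held for human review instead of being posted automatically.

BLOCKLIST = {"nazi", "slur_example"}  # stand-in for a real moderation list

review_queue = []

def publish(candidate):
    """Post a learned reply only if it passes the filter; else hold it."""
    if any(word in candidate.lower() for word in BLOCKLIST):
        review_queue.append(candidate)  # a human moderator decides later
        return None
    return candidate
```

A keyword blocklist alone would not have saved Tay, but the design choice matters: the default is "hold for review," so a prank campaign fills a queue instead of a public timeline.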
Three Rules for Building Safe, Useful Corporate Chatbots
Follow these three basic chatbot rules and you will have helpful chatbots and happy customers. The meta-rule behind them is the traditional KISS, or "keep it simple, sailor." Recognize that your chatbot will have limitations, and work within them, because fake "do it all" artificial intelligence is worse than no artificial intelligence at all.
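The "work within your limitations" rule translates naturally into code: score how well a message matches the intents the bot actually knows, and admit defeat below a confidence threshold rather than faking an answer. This is a hedged sketch; the intents, threshold value, and fallback wording are all made up for illustration.

```python
# Sketch of the know-your-limits rule: answer only when the best intent
# match clears a confidence threshold; otherwise admit the limitation.
from difflib import SequenceMatcher

INTENTS = {
    "track my order": "You can track your order at /orders.",
    "reset my password": "Use the 'Forgot password' link to reset it.",
}

def answer(message, threshold=0.6):
    best_score, best_reply = 0.0, ""
    for phrase, reply in INTENTS.items():
        score = SequenceMatcher(None, message.lower(), phrase).ratio()
        if score > best_score:
            best_score, best_reply = score, reply
    if best_score >= threshold:
        return best_reply
    return "I'm not sure about that -- let me find someone who can help."
```

An honest "I'm not sure" keeps the bot simple and the customer's trust intact, which is the whole point of the KISS meta-rule.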
Our next article on this topic will be about voice chatbots. Meanwhile, you may want to read this article: Chatbots in retail: 2017 is shaping up to be a big year. And if you're eager to start developing your own chatbot or want to modify an existing one for your purposes, bot Stash is an amazing resource that can take you all the way from bot basics to advanced development, including APIs for platforms ranging from Facebook to Skype.