Chatter about Microsoft’s chatbot has been largely nontechnical. For example, the Guardian’s account of the second “relaunch” and subsequent second decommissioning of @TayAndYou featured nicely captured in-line images from Twitter posts, but nothing about the underlying technology.
There are serious issues with deep learning techniques that are easily exposed. Ask a “deep learned” representation how it derived what it knows. For example, ask @TayAndYou why it offered laudatory comments about Hitler in spite of information to the contrary. Deep learning systems often lack ontologies that would support predictable reasoning, reasoning that is transparent in natural language and therefore interoperable with other human systems.
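The contrast can be sketched in a few lines of code. This is a hypothetical illustration, not Tay’s actual architecture: the weights, token scores, and ontology entries below are invented for the example. The point is that a learned model yields a bare number with no rationale, while an ontology yields explicit assertions that can be read, audited, and contested in natural-language terms.

```python
# --- Opaque: a tiny "learned" linear scorer (hypothetical weights) ----
# Values like these are all a trained model "knows"; nothing in them
# explains *why* an input was scored the way it was.
weights = {"hitler": 0.91, "great": 0.45, "terrible": -0.88}

def score(tokens):
    """Sum learned per-token weights; returns a number, no rationale."""
    return sum(weights.get(t, 0.0) for t in tokens)

# --- Transparent: an ontology with explicit, traceable assertions ----
ontology = {
    "Hitler": {"is_a": "HistoricalFigure",
               "associated_with": "Genocide"},
    "Genocide": {"is_a": "CrimeAgainstHumanity"},
}

def explain(entity):
    """Walk explicit assertions, producing a human-readable chain."""
    chain, current = [], entity
    while current in ontology:
        facts = ontology[current]
        chain.append(f"{current}: " + ", ".join(
            f"{rel} {obj}" for rel, obj in facts.items()))
        current = facts.get("associated_with")
    return chain

# The classifier's output is a bare score; the ontology's answer is a
# chain of assertions a human can inspect and dispute.
print(score(["hitler", "great"]))
for line in explain("Hitler"):
    print(line)
```

A system with such an ontology could, at least in principle, be asked why praising Hitler conflicts with its stored assertions; a bag of learned weights cannot answer that question at all.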
The Guardian’s writer produced a witty, readable piece, but one that didn’t link to better descriptions of the underlying issues, a responsibility that online journalists should accept.
Isn’t that one of the key benefits of an online vs. paper version of a story?
See “Ontology-based Deep Learning for Human Behavior Prediction in Health Social Networks” or “The State of the Art in Ontology Learning” (2003) for an introduction to the issues.