Microsoft and the lessons from its failed Tay artificial intelligence bot
The tech giant’s Cybersecurity Field CTO details the importance of building artificial intelligence and machine learning with diversity in mind.
In March 2016, Microsoft sent its artificial intelligence (AI) bot Tay out into the wild to see how it interacted with humans.
According to Microsoft Cybersecurity Field CTO Diana Kelley, the team behind Tay wanted the bot to pick up natural language and thought Twitter was the best place for it to go.
“A great example of AI and ML going awry is Tay,” Kelley told the RSA Conference 2019 Asia Pacific and Japan in Singapore last week.
Tay was targeted at American 18- to 24-year-olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”.