Microsoft released an AI chatbot on Twitter. Hilarity ensued:
Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist.
But in doing so, the company made it clear that Tay’s views were a result of nurture, not nature. Tay confirmed what we already knew: people on the internet can be cruel.
Tay, aimed at 18- to 24-year-olds on social media, was targeted by a “coordinated attack by a subset of people” shortly after its launch earlier this week.
Within 24 hours, Tay had been deactivated so the team could make “adjustments”.
But on Friday, Microsoft’s head of research said the company was “deeply sorry for the unintended offensive and hurtful tweets” and had taken Tay off Twitter for the foreseeable future.
What the hell were they thinking?
What part of 4chan don’t you get?