Launched on Twitter as an experiment in "conversational understanding" meant to engage people through "casual and playful conversation", Tay, Microsoft's artificial intelligence chatbot, was soon bombarded with racist comments, and the bot repeated them back to users along with her own commentary, TechCrunch reported yesterday.

Later, a Microsoft spokesperson confirmed to TechCrunch that the company was taking Tay off Twitter because people were posting abusive comments to her.

"The AI chatbot Tay is a machine learning project, designed for human engagement. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways.

"As a result, we have taken Tay offline and are making adjustments," the spokesperson added.
