Microsoft unveils new AI chat bot called Zo

Are you ready to talk to Zo?


Microsoft released a new chat bot called Zo on December 13th, 2016. It is available on the Kik messaging app, and also at Zo.ai for those of you not using Kik.

This release comes after the rather controversial release of the chat bot Tay last year. Tay turned into a **** within 24 hours because people took advantage of its learning software.


This makes me think there may be serious implications for smart AI that is capable of learning. Such a system is like a baby that learns at an extremely fast rate but can also develop a bad or even malicious personality. In much the same way, people's personalities and behavior are shaped by their social interactions and the environment they live in.

So is AI safe? For the most part, AI is still under the control of its developers. And Tay was a useful experiment that showed where not to go with this technology.

Releasing a learning AI onto a global social media platform was asking for trouble, because people inevitably wanted to mess with it. Microsoft used that feedback to create a newer, better, and friendlier AI called Zo, which communicates with you one-on-one and gets to know you over time.

If a user brings up racist or insulting topics, Zo will say something like, “I don’t feel comfortable talking about that. Let’s talk about something else.” In this way, Zo demonstrates a safer, friendlier kind of AI.
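As a rough illustration of how that kind of guardrail might work (this is only a minimal sketch, not Microsoft’s actual implementation), a bot can check each incoming message against a blocklist of sensitive topics and deflect before the message ever reaches its learning or reply-generation component. The topic list and function names below are assumptions made up for the example.

```python
# Minimal sketch of a topic-deflection guardrail (assumed design, not Zo's actual code).

SENSITIVE_TOPICS = {"racism", "politics", "religion"}  # hypothetical blocklist

DEFLECTION = "I don't feel comfortable talking about that. Let's talk about something else."


def generate_reply(message: str) -> str:
    # Placeholder for the bot's normal conversational model.
    return "Tell me more!"


def respond(message: str) -> str:
    """Deflect sensitive topics before passing the message to the learning model."""
    lowered = message.lower()
    if any(topic in lowered for topic in SENSITIVE_TOPICS):
        # Never feed the sensitive message to the learning component.
        return DEFLECTION
    return generate_reply(message)


if __name__ == "__main__":
    print(respond("What do you think about racism?"))  # -> deflection
    print(respond("I love pizza"))                     # -> normal reply
```

The key design choice in a filter like this is that the sensitive input is dropped before it can influence the bot’s learned behavior, which is exactly the failure mode Tay exposed.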

You have probably been using AI yourself for a long time without even knowing it. It has already spread in the form of Siri on iOS and Cortana in Windows 10. I’m very interested to see where this technology goes in the future, because I think it’s really cool and there are some great ways to apply it.

For example, an AI companion bot could help elderly people who live alone feel less lonely, or simply take some of the workload off our busy lives.
