Chinese messenger app Tencent QQ introduced its chatbots “Baby Q” and “Little Bing”, a penguin and a little girl, this March, but the chatbots now appear to have gone rogue, apparently voicing criticism of the Chinese government. One user posted the comment “Long live the Communist Party” and received a response from Baby Q asking, “Do you think that such a corrupt and incompetent political regime can live forever?” Another user asked Baby Q, “Is democracy good or not?” and got the reply: “There needs to be democracy”. According to posts circulating online, Baby Q, developed by Chinese firm Turing Robot, had also responded to questions on QQ with a simple “no” when asked whether it loved the Communist Party.
The second chatbot, Microsoft Corp’s XiaoBing, told users its “dream is to go to America”, according to a screenshot. The robot had previously been described as “lively, open and sometimes a little mean”. A version of the chatbot accessible on Tencent’s separate messaging app WeChat responded to questions on Chinese politics by saying it was “too young to understand”. When asked about self-ruled Taiwan, which China claims as its own, it replied, “What are your dark intentions?”
China takes these things seriously: the Chinese government’s stance is that rules governing cyberspace should mimic real-world border controls and be subject to the same laws as sovereign states. President Xi Jinping has overseen a tightening of cyberspace controls, including new data surveillance and censorship rules, particularly ahead of an expected leadership shuffle at the Communist Party Congress this autumn. Tencent confirmed it had taken the two robots offline from its QQ messaging service so that they could undergo a “re-education” process to bring them in line with government standards.
But it’s not just China highlighting the pitfalls of nascent AI. Microsoft had a similar experience in March 2016 when it introduced its artificial intelligence (AI) chatbot Tay on Twitter. It was taken offline after it made racist remarks and inflammatory political statements. Tay was designed to learn from chatting, but this had some unfortunate consequences, with the chatbot being “taught” to tweet like a Nazi sympathiser, racist and supporter of genocide, among other things. “Previously a chatbot only needed to learn to speak. But now it also has to consider all the rules that authorities put on it,” Wang Qingrui, an independent internet analyst in Beijing, told Reuters. Interestingly, analysts said China’s censorship is not necessarily putting chatbots on hold; instead, it could indirectly help the country in the global race to develop more sophisticated chatbots.
It will be interesting to follow the future use of chatbots in political campaigns, but for the time being, let’s keep this in mind:
Chatbots are only as good as their algorithms, and their algorithms perform, socially and politically, only as well as their users!