Why Chatbot AI Is a Problem for China

If the technology is only as good as the information it learns from, then state censorship is not a recipe for success.


ChatGPT, the chatbot designed by the San Francisco–based company OpenAI, has elicited excitement, some unease, and much wonderment around the world. In China, though, the U.S. bot and the artificial intelligence that makes it work represent a threat to the country’s political system and global ambitions. This is because chatbots such as ChatGPT revel in information—something the Chinese state insists on controlling.

The Chinese Communist Party keeps itself in power through censorship, and under its domineering leader, Xi Jinping, that effort has intensified in a quest for greater ideological conformity. Chatbots are especially tricky to censor. What they blurt out can be unpredictable, and when it comes to micromanaging what the Chinese public knows, reads, and shares, the authorities don’t like surprises.

Yet this political imperative collides with the country’s urgent and essential need for innovation, especially in areas such as AI and chatbots. Without continuing technological advances, China’s economic miracle could stall and undercut Xi’s aim of overtaking the United States as the world’s premier superpower. Xi is as intent on his campaign for technological progress as he is on his drive for stricter social control. The development of AI is a crucial pillar of that program, and ChatGPT has exposed how China’s tech sector still lags behind that of its chief geopolitical rival, the U.S.

“The Chinese government is very torn” on chatbots, Matt Sheehan, a fellow who focuses on global technology at the Carnegie Endowment for International Peace, told me. “Ideological control, information control, is one of, if not the, top priority of the Chinese government. But they’ve also made leadership in AI and other emerging technologies a top priority.” Chatbots, he said, are “where these two things start to come into conflict.”

Which path Xi chooses could have huge consequences for China’s competitiveness in technology. Will he permit the progress that can propel China to dominance in the global economy? Or will he sacrifice the cause of innovation to his desire to maintain his grip on Chinese society?

Those who live in open societies tend to believe that free thinking and the free flow of information are indispensable prerequisites for innovation. A corollary of this view is that a political system such as China’s, which stifles intellectual curiosity and enforces social conformity, discourages the creativity and risk-taking necessary for achieving breakthroughs. In some respects, that argument has merit. There is no Chinese Disney, for instance, and there may never be as long as the state restricts the freedom of filmmakers to tell stories and create characters. Pop culture across Asia is dominated by what the democratic societies of Japan and South Korea produce.

China’s authoritarianism already inhibits its tech sector in other ways. The Chinese video-sharing app TikTok is facing a possible ban or forced sale in the U.S. because of fears that its Beijing-headquartered parent company, ByteDance, could be pressured to hand over private data on American citizens to China’s security state.

Chinese leaders do not believe innovation requires individual liberties, and they see no contradiction between political control and high-tech aspiration. Communist autocracy has not prevented Chinese companies from emerging as leaders in sectors such as 5G telecommunications networks or electric vehicles. Nor has censorship impeded the development of technologies in the politically riskier realm of data and content. China has vibrant and inventive industries in gaming and social media.

In addition, far from suppressing potentially disruptive and subversive AI technology, the state has actively supported it. In 2017, the State Council, the country’s top governing body, released a national strategy for the sector called the “New Generation Artificial Intelligence Development Plan,” with the goal of “making China the world’s primary AI innovation center” by 2030. In his report to October’s important Communist Party congress, Xi specifically mentioned AI as one of the “new growth engines” that the country must cultivate.

Despite this high-level attention, China’s AI sector lags behind America’s—at least in the area of chatbots, as ChatGPT made all too obvious. In China, “the government, tech entrepreneurs, and investors understand how incredible ChatGPT is and they don’t want to be left behind,” Jordan Schneider, a senior analyst with the research firm Rhodium Group, told me. “To sort of be upstaged so dramatically by OpenAI and ChatGPT was a little embarrassing and is something that is certainly going to focus minds and companies and talent around closing that gap.”

The deficit appears significant. In March, Robin Li, the founder of the Chinese internet-search firm Baidu, tried to show off his own ERNIE Bot, but the demonstration—which used prerecorded results—was so disappointing that the company’s share price plunged on the Hong Kong stock exchange.

Left to themselves, the talented engineers and coders at Baidu and other Chinese AI labs will likely catch up. But the state is certain to interfere. Whatever chatbots the tech firms create will have to abide by the same restrictions on speech that China’s human residents are compelled to follow. That was made clear this month when the country’s cybersecurity watchdog issued new draft regulations for the AI sector that require chatbots to produce content in line with socialist values and not liable to subvert state power—broad categories indeed.

The government imposes such censorship on the digital world with the same blunt force it applies to the real world. An army of diligent censors scrubs politically sensitive material from social-media platforms. Many foreign media and internet services are blocked by the Great Firewall, the digital fortification erected by the state to keep out unwanted information and ideas. Internet searches are restricted. Authorities have taken steps to prevent Chinese citizens from using ChatGPT. Regulators reportedly ordered Chinese tech firms to deny their users access.

Otherwise, ChatGPT will produce politically unacceptable—if, in all likelihood, truthful—information on such topics as Beijing’s mistreatment of the minority Uyghur community, which the state doesn’t want the Chinese public to see. The China Daily, a news outlet owned by the Chinese government, warned that ChatGPT can “boost propaganda campaigns launched by the U.S.”

Baidu’s ERNIE, available to the public on a limited basis only, simply refuses to respond to some politically suspect queries and tries instead to change the subject. (I requested access to ERNIE for this article, but have not been granted it.)

How Baidu and other chatbot providers adjust their models to adhere to the state’s censorship rules could have further negative effects. For instance, a chatbot model trained only on vetted information encircled by China’s Great Firewall is unlikely to be as effective as a foreign competitor that draws on a wider and more diverse corpus of sources. (In a recent press release, Baidu noted that ERNIE had been trained on “a knowledge graph of 550 billion facts” and other material, but when I asked for further details of the sources, the company would not comment.)

Chatbots are also potentially more difficult to censor than earlier forms of digital media. Chatbot models will analyze, collate, and connect data in unexpected and surprising ways. “The best analogy would be to how a human learns,” Jeffrey Ding, a political scientist at George Washington University who studies Chinese technology, explained to me. “Even if you are learning things from only a censored set of books, the interactions between all those different books you are reading might produce either flawed information or politically sensitive information.”

That presents special challenges to Chinese AI specialists and state censors. Even if a Chinese chatbot is trained on a limited set of politically acceptable information, it can’t be guaranteed to generate politically acceptable outcomes. Furthermore, chatbots can be “tricked” by determined users into revealing dangerous information or stating things they have been trained not to say, a phenomenon that has already occurred with ChatGPT.

This unpredictability places China’s tech sector in an unenviable position. On the one hand, researchers are under pressure to achieve breakthroughs in AI and meet the government’s targets. On the other, designing chatbots could be dangerous in a political environment that tolerates no dissent. The authorities are unlikely to look kindly on a chatbot that breaks the rules—or on the entrepreneurs and engineers designing and training it. To drive that point home, the draft regulations from the cybersecurity agency hold chatbot providers responsible for the content they produce. That alone could discourage China’s tech elite from pursuing chatbots, or at least advanced models of them that would be available to the public.

Fettering chatbots with too many constraints, however, could imperil China’s progress, as well as inhibit developments in the crucial science behind them. “Chatbots are not just a funny toy,” Sheehan, of the Carnegie Endowment, told me. “A lot of people in the deep tech of AI think this is the most promising path forward for creating more general artificial intelligence, which is kind of the holy grail of the field.” Therefore, “Chinese officials are at cross purposes on this one.”

Much will depend on what China’s leaders are willing to let slide in the name of experimentation. There are good reasons to believe they will allow some latitude. The explosion of social media in China has also posed risks to the state, as it offers Chinese citizens the power to widely share unauthorized information—videos of protests, for instance—faster than censors can suppress it. Yet the authorities have accepted this downside in order to allow new technologies to flourish.

“I do think the Chinese government is concerned about the negative, harmful effects of AI,” Ding told me. Despite “the censorship,” he added, “we’ve seen from the past track record of Chinese companies and the Chinese government that there is a way forward with respect to creating breakthrough innovations in this space.”

The Chinese government could even find ways to use chatbots to its advantage. Just as the authorities have been able to co-opt social media and employ the platforms to manipulate popular opinion, monitor the public, and track dissenters, so could a chatbot easily become a tool of social control, promoting official narratives and principles. In their recent book, Surveillance State, the journalists Josh Chin and Liza Lin write that China’s rulers believe that becoming a leader in such technologies as AI “would help the Party build a new system of control that would ensure its own well-being.”

Such an obedient, party-line chatbot—shielded from more formidable, uncensored foreign competitors behind the Great Firewall—could succeed perfectly well within China yet have little appeal outside. In that case, what China’s authoritarianism will inhibit is not technological advancement per se, but its technological competitiveness in the wider world.

Michael Schuman is a contributing writer at The Atlantic, based in Beijing, China.