China implemented its latest limits on AI-generated content this week,
a watered-down version of stronger draft guidelines that attempts to keep the country competitive in the AI race while maintaining rigorous censorship of internet content.
Rapid advances in generative AI have sparked widespread concern about the technology’s potential for distortion and misinformation, with deepfake images and videos depicting people saying and doing things they never did.
Since the introduction of San Francisco-based OpenAI’s ChatGPT, which is banned in China, Chinese companies have hurried to develop artificial intelligence systems that can mimic human speech.
The 24 new guidelines appear to have been softened from the severe draft restrictions announced earlier this year, according to experts, as Beijing seeks to help local newcomers compete in the US-dominated market.
Here’s what you need to know about Beijing’s regulations governing publicly available generative AI services:
Ethics in AI
According to the rules, generative AI must “adhere to the core values of socialism” and must not endanger national security or promote terrorism, violence, or “ethnic hatred.”
When building algorithms, service providers must label AI-generated material as such and take steps to prevent discrimination based on gender, age, and race.
Their software should not generate content with “false and harmful information.”
Individuals must give consent before their personal information can be used in AI training, and AI models must be trained on legally obtained data sources that do not infringe on the intellectual property rights of others.
According to the rules announced in July by Beijing’s cyberspace authority, companies developing publicly available generative AI software must “take effective measures to prevent underage users from excessive reliance on or addiction to generative AI services.”
They must also develop procedures for the public to report improper content and erase any illegal content as soon as possible.
The rules require service providers to conduct security assessments and file details of their algorithms with the authorities if their software is judged capable of influencing “public opinion,” a retreat from earlier draft rules that mandated security assessments for all public-facing programs.
Technically, the guidelines are “provisional measures” subject to the terms of pre-existing Chinese laws.
They are the most recent in a series of laws aimed at various parts of AI technology, including deep synthesis (deepfake) rules that went into effect earlier this year.
“From the outset, and somewhat differently than the EU, China has taken a more vertical or narrow approach to creating relevant legislation, focusing more on specific issues,” noted partners at international law firm Taylor Wessing.
While an earlier draft of the regulations set fines of up to 100,000 yuan ($13,824) for violations, the final version states that violators will receive a warning or face suspension, with harsher penalties imposed only if they are found to have broken existing laws.
“Chinese legislation falls somewhere between the EU and the US, with the EU taking the most stringent approach and the US taking the most lenient,” Angela Zhang, associate professor of law at Hong Kong University, told AFP.
While the rules are partly aimed at maintaining censors’ strict control over online content, several restrictions on generative AI that appeared in an earlier draft have been softened, according to Jeremy Daum of China Law Translate.
“Many of the strictest controls now yield significantly to another factor: promoting development and innovation in the AI industry,” stated Daum on his China Law Translate blog.
The restrictions’ scope has been sharply narrowed to apply only to publicly available generative AI programs, excluding those used for research and development.
“The shift could be interpreted as Beijing’s acceptance of the idea of an AI race in which it must remain competitive,” Daum added.