
Framework seeks to keep AI in line

Rapid development of technology presents potential safety risks

By JIANG CHENGLONG | China Daily | Updated: 2025-11-13 08:49
[Illustration by Shi Yu/China Daily]

In a move reflecting the fast-paced breakthroughs in artificial intelligence, on Sept 15, China released its upgraded AI Safety Governance Framework 2.0.

The latest framework signals a significant strategic evolution from its predecessor, shifting from a static list of risks to a full life cycle governance methodology.

It comes just a year after the first framework was released by the National Technical Committee 260 on Cybersecurity, China's key body responsible for cybersecurity standardization.

In its preface, the new iteration notes that the update was driven by breakthroughs in AI technology that had been "beyond expectation". These breakthroughs include the emergence of high-performance reasoning models that drastically increase AI's intellectual capabilities, and the open-sourcing of high-efficacy, lightweight models, which have strongly lowered the barrier to deploying AI systems.

At the same time, the manifestations and magnitude of AI security risks — and people's understanding of them — are evolving rapidly.

The core objective has evolved from simply preventing risks to ensuring technology remains under human control, according to Wang Yingchun, a researcher at the Shanghai Artificial Intelligence Laboratory, who called the move a "major leap" in governance logic.

In a commentary published on the official website of the Cyberspace Administration of China, he emphasized that the framework aims to guard the bottom line of national security, social stability and the long-term survival of humanity.

Preventing loss of control

The most significant shift from version 1.0, Wang said, is the introduction of a new governance principle that focuses on trustworthy applications and on preventing loss of control.

This principle is supported by the framework's new addendum listing the fundamental principles for trustworthy AI, which mandates ultimate human control and value alignment.

Hong Yanqing, a professor specializing in cybersecurity at the Beijing Institute of Technology, said in a commentary that the newly added principle is intended to ensure that the evolution of AI remains safe, reliable and controllable. It must guard against runaway risks that could threaten human survival and development, and keep AI firmly under human control, he said.

Reflecting this high-stakes focus, the new framework lists real-world threats that directly impact human security and scientific integrity. It includes the loss of control over knowledge and capabilities of nuclear, biological, chemical and missile weapons.

It explains that AI models are often trained on broad, content-rich datasets that may include foundational knowledge related to nuclear, biological and chemical weapons, and that some systems are paired with retrieval-augmented generation tools.

"If not effectively governed, such capabilities could be exploited by extremists or terrorists to acquire relevant know-how and even to design, manufacture, synthesize and use nuclear, biological and chemical weapons — undermining existing control regimes and heightening peace and security risks across all regions," the framework said.

Derivative societal risks

For the first time, the framework introduces a category covering derivative safety risks from AI applications, warning of the systemic risks AI applications could pose to macro-level social systems.

The framework warns that AI misuse could disrupt labor and employment structures.

"AI is accelerating major adjustments in the forces and relations of production, restructuring the traditional economic order. As capital, technology and data gain primacy in economic activity, the value of labor is weakened, leading to a marked decline in demand for traditional labor," it said.

The framework also cautions that resource supply-demand balances may be upset, stressing that some problems that have emerged in AI development, such as the disorderly construction of computing infrastructure, are accelerating the consumption of electricity, land and water, posing new challenges to resource balance and to green, low-carbon development.

The framework even warns that AI self-awareness cannot be ruled out in the future, with potential risks of systems seeking to break free of human control.

"In the future, it cannot be excluded that AI may experience sudden, beyond-expectation 'leaps' in intelligence — autonomously acquiring external resources, self-replicating, developing self-awareness and seeking external power — thereby creating risks of vying with humanity for control," the framework said.

The framework also warns that AI may foster addictive, anthropomorphic interactions. "AI products built on human-like interaction can lead users to form emotional dependence, which in turn shapes their behavior and creates social and ethical risks," it said.

Moreover, the existing social order could be challenged, it added, noting that AI's development and application are "bringing major changes to tools and relations of production, accelerating the restructuring of traditional industry models, and upending conventional views on employment, childbirth, and education — thus challenging the traditional social order".

Researcher Wang said the newly added section goes beyond familiar safety topics such as "harmful content" and "cognitive confrontation", bringing social structures, scientific activity and humanity's long-term survival and development into the scope of AI safety governance.

The higher-level aim, he said, is to hold the bottom line of national security, social stability and the long-term continuity of humankind.

China solution

Amid global competition and cooperation in AI, the framework not only supports the healthy development of China's AI sector, but also signals China's firm resolve to safeguard AI security and ensure AI benefits humanity, according to Hong, the Beijing Institute of Technology professor.

Version 2.0 aligns concrete measures with international governance practice, Hong said. It emphasizes labeling and traceability for AI-generated content, in line with the approaches of the United States and the European Union for regulating deep-synthesis media.

Beyond labeling and traceability for AI-generated content, the framework's safety guidance also calls for deploying deepfake detection tools in scenarios such as government information disclosure and judicial evidence collection, to verify sources and cross-check information suspected of being generated by large models.

"These measures demonstrate China's openness and willingness to cooperate in global AI governance," Hong said.

Internationally, AI safety governance is receiving unprecedented attention, he said, with countries and international organizations rolling out initiatives and rules in quick succession.

"By further aligning with international norms through Framework 2.0, China is responding to consensus concepts such as trustworthy AI and AI for good," Hong said.

The nation is also matching international best practices in content labeling and governance guidelines, offering a China solution to global AI governance, he added.
