
China, US can compete and cooperate on AI

By Daniel Castro | China Daily | Updated: 2025-11-24 00:00
The author is vice-president of the Information Technology and Innovation Foundation in the US and director of its Center for Data Innovation.

Both the United States and China have made artificial intelligence (AI) a national priority. While Washington has launched its AI Action Plan to accelerate AI innovation and adoption across the US economy, Beijing has identified AI as a central component of its strategy for developing "new quality productive forces", a goal emphasized for the upcoming 15th Five-Year Plan period (2026-30). The two countries are competing head-to-head for technological leadership, not only in model development but also in critical AI-enabled applications such as biotechnology, advanced materials and robotics.

This competition to lead the next technological frontier is natural and healthy because it drives progress, efficiency and scientific discovery. But competition in capabilities does not preclude cooperation on guardrails.

AI safety — the effort to ensure that systems work as intended and do not create dangerous spillover effects — should be an area for limited but deliberate cooperation between the world's two AI superpowers. The reason is straightforward: some AI failures stay within borders, but others do not.

An unsafe autonomous vehicle is a domestic problem. If a self-driving car malfunctions in Shenzhen or San Francisco, the damage is local. Each country can handle those risks through its own regulations and liability systems. The same goes for biased algorithms, privacy issues or the use of deepfakes in domestic politics.

But certain categories of AI risk have negative externalities that cross borders. A model that makes it easy to design a biological or chemical weapon or automate cyberattacks doesn't just endanger the country it was built in — it endangers everyone across the globe. These are strategic safety issues, not commercial or consumer concerns. Neither the US nor China benefits if the other side makes a mistake in handling them. A major misuse or technical failure would invite global backlash, pressure for sweeping restrictions and potentially duplicative testing requirements by third countries that slow progress for both sides.

That's why both countries should cooperate, not necessarily on AI regulation, but on research and data related to risk detection, evaluation and incident response. Understanding how frontier models can be repurposed for harmful applications, or how they can fail in ways that cascade through digital systems, requires substantial experimentation and technical analysis. Both sides already invest in this kind of work domestically. Joint efforts and more information sharing could reduce redundancy, improve coverage and clarify which risks require containment measures.

This does not mean shared rules or harmonized laws. The US and China will continue to take different policy paths based on their own institutions and political systems. But the underlying science of AI safety — how models behave, how they can be stress-tested and how incidents can be identified and analyzed — does not need to be duplicated in isolation. Shared baselines make everyone's work more efficient and reduce unnecessary fragmentation.

There are models for this kind of cooperation. During the Cold War, US and Soviet scientists engaged in lab-to-lab collaboration on nuclear material security and reactor safety. The two governments remained geopolitical rivals, but their scientific institutions found opportunities to share technical methods to prevent accidents. The logic was simple: when safety risks affect everyone, preventing accidents is in everyone's interest. The same logic applies to AI. As these systems become more capable and widely available, ensuring their safety becomes a matter of shared security, not national preference.

A practical path forward would begin with shared incident tracking and vulnerability reporting. When an AI system violates safety expectations, such as producing malicious code, those events should be documented and communicated through technical research channels. Researchers can compare data on failure modes, benchmark evaluation tools and identify where new testing methods are needed.

Another step would be joint red-team exercises — controlled tests where researchers deliberately probe advanced models for misuse potential. These could be conducted under academic or multilateral frameworks with strict intellectual property protections. Cooperation could extend to research on detection and containment techniques — how to prevent models from being modified to bypass safeguards, how to identify model leaks and how to evaluate the security of model hosting environments. None of this work requires trust or political alignment, only technical competence and coordination.

Many global AI governance initiatives mistakenly assume that countries will converge on a common approach. History suggests otherwise. Nations have long made different moral and legal choices about emerging technologies such as genetically modified crops, gene editing and stem-cell research. There was never a single global treaty governing those technologies; instead, countries adopted their own rules.

This approach balances realism with responsibility. The US and China will continue to compete for leadership in AI innovation and commercial deployment in global markets. But they can also recognize that preventing cross-border harm from unsafe AI systems is in the interest of both nations. Neither country benefits if accidents undermine confidence in the technology itself.

Strategic AI safety — ensuring that advanced AI remains stable, predictable and secure — should be treated as a shared goal, much like nuclear reactor safety or pandemic surveillance. The competition to build more capable systems will continue. But cooperation to prevent cross-border harm is simply common sense.

The views don't necessarily reflect those of China Daily.
