
Risk control key in AI-guided weapon system

By Liu Wei | China Daily | Updated: 2024-09-12 07:19
[Illustration by Shi Yu/China Daily]

Editor's note: The booming AI industry has not only created opportunities for economic and social development but also brought some challenges. Further developing AI standards can help promote technological progress, enterprise development and industrial upgrading. Three experts share their views on the issue with China Daily.

The development of artificial intelligence (AI) and autonomous weapon systems will boost the defense capability of some countries, changing traditional strategic and tactical landscapes, giving rise to new challenges and making risk management more difficult.

Generally, the application of AI technology in the defense sector can increase the precision of attacks, but it could also lead to more misunderstandings and misjudgments, escalating disputes and confrontations. For instance, precision-guided weapon systems can strike targets more efficiently and effectively, and AI's fast decision-making can improve combat efficiency but increase the chances of contextual misjudgment.

In particular, the application of AI in information warfare, including "deepfakes", can spread false information and thus increase the chances of the "enemies" making wrong decisions, rendering conflicts more unpredictable and uncontrollable.

First, AI-guided weapon systems may misjudge and misidentify targets, mistaking civilians or friendly and non-military forces for targets, causing unnecessary deaths and destruction. Such misjudgments can provoke strong reactions or retaliation from the "enemies".

If AI-guided weapon systems cannot understand the complex battlefield environment, they could make wrong decisions. For example, AI might fail to understand the enemy's tactical intentions or background information, leading to failed action or excessive use of force.

Second, AI-guided weapon systems may lack flexibility in decision-making or fail to handle the complex dynamics of the battlefield, because of their dependence on preset rules and algorithms.

Third, AI-guided weapon systems can be vulnerable to hacking attacks and carry cybersecurity risks. For example, hackers can disrupt defense systems or, worse, manipulate them into making wrong decisions or launching attacks. In fact, AI-guided weapon systems, once out of control, could pose a serious threat to their own side. Also, AI-guided weapon systems could fall into the hands of terrorists.

And fourth, although AI applications can hasten the decision-making process, it is doubtful whether AI, when making fast decisions, can adequately assess their consequences. This calls for establishing strict international rules and oversight mechanisms to minimize the risks posed by AI-guided weapon systems, and to ensure AI is used for the betterment of the people.

To make sure AI-guided weapon systems remain controllable, they should be designed for human-machine collaboration, with humans controlling the process. There should be interfaces and feedback mechanisms that give operators regular updates on developments, so they can intervene in a timely manner if and when needed.
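To illustrate the kind of human-in-the-loop control described above, here is a minimal sketch, assuming a hypothetical operator console; the names EngagementRecommendation, OperatorConsole and decide are illustrative and not drawn from any real weapon-control system. The automated component may only recommend an action; execution requires explicit operator confirmation.

```python
# Hypothetical sketch of a human-in-the-loop decision gate.
# Names (EngagementRecommendation, OperatorConsole, decide) are illustrative,
# not drawn from any real weapon-control system.
from dataclasses import dataclass


@dataclass
class EngagementRecommendation:
    target_id: str
    confidence: float  # the model's confidence in its target identification
    rationale: str     # human-readable explanation shown to the operator


class OperatorConsole:
    """Interface that surfaces each recommendation and records the human decision."""

    def review(self, rec: EngagementRecommendation) -> bool:
        print(f"Target {rec.target_id} (confidence {rec.confidence:.2f}): {rec.rationale}")
        return input("Authorize? [y/N] ").strip().lower() == "y"


def decide(rec: EngagementRecommendation, console: OperatorConsole,
           min_confidence: float = 0.9) -> bool:
    """The automated system only recommends; execution requires both a
    confidence threshold and explicit human authorization."""
    if rec.confidence < min_confidence:
        return False            # low confidence: not even proposed to the operator
    return console.review(rec)  # final authority rests with the human


if __name__ == "__main__":
    console = OperatorConsole()
    rec = EngagementRecommendation("T-01", 0.95, "matches preset engagement criteria")
    print("Authorized" if decide(rec, console) else "Withheld")
```

The point of the sketch is that the human, not the algorithm, holds the final veto, which is what the interfaces and feedback mechanisms described above are meant to guarantee.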

Countries collaborating to formulate international rules and regulations should highlight the importance of global treaties and agreements — similar to the Biological Weapons Convention and the Chemical Weapons Convention — in regulating the use of AI in military applications. Setting technological standards and promoting best practices can help ensure AI weapon systems meet safety and ethical requirements.

Besides, AI systems need to be more transparent, and countries need to follow an open and transparent development process that can be reviewed and verified if and when the need arises. This will help identify potential problems and make the systems more reliable. Independent auditing firms should also be hired to audit manufacturers' accounts annually and conduct regular inspections to make sure they comply with international regulations.

There is also a need to strengthen the ethical and legal frameworks to ensure the application of AI systems for military use aligns with humanitarian and international laws, and to hold the manufacturers of AI weapon systems accountable if they violate the established norms.

More important, control and monitoring mechanisms should be established to ensure human supervision in AI decision-making processes, because only humans can effectively prevent automated systems from going rogue.

Measures should also be taken to strengthen cybersecurity through multi-layered security arrangements such as encryption, authentication, intrusion detection and emergency response mechanisms, in order to thwart hacking and tampering and, if need be, quickly restore operations after a cyberattack or system failure.

In short, countries should engage in international cooperation and information sharing on AI's military applications to collectively overcome the technological challenges, while taking measures to ensure rules and regulations are not breached, because such breaches could create chaos, leading to misjudgments and wrong decisions that could trigger conflicts. Global efforts should be aimed at reducing the risks of conflict and ensuring the control of AI's military applications remains in human hands.

The author is the director of the Laboratory of Human-Machine Interaction and Cognitive Engineering at Beijing University of Posts and Telecommunications.

The views don't necessarily represent those of China Daily.

If you have specific expertise, or would like to share your thoughts about our stories, send us your writings at opinion@chinadaily.com.cn and comment@chinadaily.com.cn.
