Opinion / Op-Ed Contributors

We need a precautionary approach to AI

By Maciej Kuziemski | China Daily | Updated: 2018-05-08 07:06
[Cartoon by Song Chen/China Daily]

For policymakers in any country, the best way to make decisions is to base them on evidence, however imperfect the available data may be. But what should leaders do when facts are scarce or nonexistent? That is the quandary facing those who have to grapple with the fallout of "advanced predictive algorithms", the building blocks of machine learning and artificial intelligence (AI).

In academic circles, AI-minded scholars are either "singularitarians" or "presentists". Singularitarians generally argue that while AI technologies pose an existential threat to humanity, the benefits outweigh the costs. But although this group includes many tech luminaries and attracts significant funding, its academic output has so far failed to prove this calculus convincingly.

On the other side, presentists tend to focus on the fairness, accountability, and transparency of new technologies. They are concerned, for example, with how automation will affect the labor market. But here, too, the research has been unpersuasive. For example, MIT Technology Review recently compared the findings of 19 major studies examining predicted job losses, and found that forecasts for the number of globally "destroyed" jobs vary from 1.8 million to 2 billion.

Simply put, there is no "serviceable truth" to either side of this debate. When predictions of AI's impact range from minor job-market disruptions to human extinction, clearly the world needs a new framework to analyze and manage the coming technological disruption.

Every so often, a "post-normal" scientific puzzle emerges, something philosophers Silvio Funtowicz and Jerome Ravetz first defined in 1993 as a problem "where facts are uncertain, values in dispute, stakes high, and decisions urgent". For challenges such as these, of which AI is one, policy cannot afford to wait for science to catch up.

At the moment, most AI policymaking occurs in the "Global North", which de-emphasizes the concerns of less-developed countries and makes it harder to govern dual-use technologies. Worse, policymakers often fail to consider the potential environmental impact, and focus almost exclusively on the anthropogenic effects of automation, robotics and machines.

One framework suited to such conditions is the precautionary principle, which holds that when an activity risks serious harm and the science is uncertain, the burden of proof falls on those proposing the activity. The principle is not without its detractors, and its merits have been debated for years. But we should accept that the lack of evidence of harm is not the same thing as evidence of the lack of harm.

For starters, applying the precautionary principle to the context of AI would help rebalance the global policy discussion, giving weaker voices more influence in debates that are currently monopolized by corporate interests. Decision-making would also be more inclusive and deliberative, and produce solutions that more closely reflected societal needs. The Institute of Electrical and Electronics Engineers, and The Future Society at Harvard's Kennedy School of Government are already spearheading work in this participatory spirit. Additional professional organizations and research centers should follow suit.

Moreover, by applying the precautionary principle, governance bodies could shift the burden of responsibility to the creators of algorithms. A requirement of explainability of algorithmic decision-making can change incentives, prevent "blackboxing", help make business decisions more transparent, and allow the public sector to catch up with the private sector in technology development. And, by forcing tech companies and governments to identify and consider multiple options, the precautionary principle would bring to the fore neglected issues, like environmental impact.

Rarely is science in a position to help manage an innovation long before the consequences of that innovation are available for study. But, in the context of algorithms, machine learning, and AI, humanity cannot afford to wait. The beauty of the precautionary principle lies not only in its grounding in international public law, but also in its track record as a framework for managing innovation in myriad scientific contexts. We should embrace it before the benefits of progress are unevenly distributed, or, worse, irreversible harm has been done.

The author is a policy fellow at the School of Transnational Governance at the European University Institute.
Project Syndicate
