
【Economist】Artificial intelligence: Peering into the black box

英文杂志 (EnglishMags) · WeChat official account · English · 2018-02-25 08:37

Full text

Introduction

Machine learning, a major branch of artificial intelligence, is gradually permeating every aspect of human life. Although its development brings certain drawbacks, these are not insoluble. People should remain open-minded and take effective measures to reduce the harm. Just as society relies on laws and norms to compensate for human flaws, machines too need constraint and oversight. However artificial intelligence develops, one thing must always be remembered: humans must play the role of regulator.

Human beings do not always understand why AIs make choices. Don’t panic



There is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.


Handing complicated tasks to computers is not new. But a recent spurt of progress in machine learning, a subfield of artificial intelligence (AI), has enabled computers to tackle many problems which were previously beyond them. The result has been an AI boom, with computers moving into everything from medical diagnosis and insurance to self-driving cars.


There is a snag, though. Machine learning works by giving computers the ability to train themselves, which adapts their programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do. When algorithms are handling trivial tasks, such as playing chess or recommending a film to watch, this “black box” problem can be safely ignored. When they are deciding who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong—as, even with the best system, they inevitably will—then customers, regulators and the courts will want to know why.


For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. Despite their futuristic sheen, the difficulties posed by clever computers are not unprecedented. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings. Adding new ones will pose a challenge, but not an insuperable one. In response to the flaws in humans, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. With a little tinkering, many of these can be applied to machines as well.


Be open-minded


Start with human beings. They are even harder to understand than a computer program. When scientists peer inside their heads, using expensive brain-scanning machines, they cannot make sense of what they see. And although humans can give explanations for their own behaviour, they are not always accurate. It is not just that people lie and dissemble. Even honest humans have only limited access to what is going on in their subconscious mind. The explanations they offer are more like retrospective rationalisations than summaries of all the complex processing their brains are doing. Machine learning itself demonstrates this. If people could explain their own patterns of thought, they could program machines to replicate them directly, instead of having to get them to teach themselves through the trial and error of machine learning.


Away from such lofty philosophy, humans have worked with computers on complex tasks for decades. As well as flying aeroplanes, computers watch bank accounts for fraud and adjudicate insurance claims. One lesson from such applications is that, wherever possible, people should supervise the machines. For all the jokes, pilots are vital in case something happens that is beyond the scope of artificial intelligence. As computers spread, companies and governments should ensure the first line of defence is a real person who can overrule the algorithms if necessary.


Even when people are not “in the loop”, as with entirely self-driving cars, today’s liability laws can help. Courts may struggle to assign blame when neither an algorithm nor its programmer can properly account for its actions. But it is not necessary to know exactly what went on in a brain—of either the silicon or biological variety—to decide whether an accident could have been avoided. Instead courts can ask the familiar question of whether a different course of action might have reasonably prevented the mistake. If so, liability could fall back onto whoever sold the product or runs the system.




