Stephen Hawking : AI could be the ‘worst thing ever for humanity’

Stephen Hawking has warned that artificial intelligence has the potential to be the downfall of mankind.

The physicist has written an article in The Independent warning of an uncertain future in which technology learns to control itself.

Discussing Johnny Depp’s latest film Transcendence, which delves into a world where computers surpass the abilities of humans, Hawking said dismissing the film as science fiction could be the “worst mistake in history”.

Writing in The Independent, Hawking readily acknowledges the good that comes from such technological advancements:

Recent landmarks such as self-driving cars, a computer winning at “Jeopardy!,” and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

But he keeps the negatives close to mind, writing that “such achievements will probably pale against what the coming decades will bring”:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Scientist Steve Omohundro recently wrote a paper that identifies six different types of “evil” artificially intelligent systems and lays out three ways to stop them:

To prevent harmful AI systems from being created in the first place. We’re not yet at the point where malicious AI is being created. Careful programming with a Hippocratic emphasis (“First, do no harm.”) will become increasingly important as AI technologies improve.

To detect malicious AI early in its life before it acquires too many resources. This is a matter of simply paying close attention to an autonomous system and shutting it down when it becomes clear that it’s up to no good.

To identify malicious AI after it’s already acquired lots of resources. This quickly approaches sci-fi nightmare territory, and it might be too late at this point.

Agencies/Canadajournal



