Stephen Hawking calls for creation of world government to meet AI challenges
In a book that has become a darling of many readers, Sapiens: A Brief History of Humankind, the historian Yuval Harari paints a picture of humanity's inexorable march toward ever greater forms of collectivization. From the tribal clans of prehistory, people gathered to create city-states, then nations, and finally empires. While certain recent political trends, namely Brexit and the nativism of Donald Trump, would seem to run counter to this pattern, another luminary of academia has now added his voice to the chorus calling for stronger forms of world government. Rather than citing ancient historical trends, though, Stephen Hawking points to artificial intelligence as the defining reason for needing stronger forms of globally enforced cooperation.

It's facile to dismiss Stephen Hawking as another scientist poking his nose into problems more germane to politics than physics, or even to suggest he is being alarmist, as many AI experts have already done. It's worth taking his point seriously, though, and weighing the evidence to see whether there's any merit to the cautionary note he sounds.

Let's first take the case made by the naysayers who claim we are a long way from AI posing any real threat to humanity. These are often the same people who suggest that Asimov's Three Laws of Robotics are sufficient to ensure ethical behavior from machines; never mind that the whole thrust of Asimov's stories is to demonstrate how things can go terribly wrong despite the three laws. Leaving that aside, it is exceedingly difficult to keep up with the breakneck pace of research in AI and robotics. One may be an expert in a small domain of AI or robotics, say pneumatic actuators, and have no clue what is going on in reinforcement learning. This tends to be the rule rather than the exception among experts, since their very expertise tends to confine them to a narrow field of endeavor.

As a tech journalist covering AI and robotics on a more or less full-time basis, I can cite many recent developments that justify Mr. Hawking's concern, including a poker-playing AI capable of besting top human professionals, to highlight just one. Adding to this, it's increasingly clear there's already something of an AI arms race underway, with China and the United States pouring resources into the computing infrastructure that can support the ever-hungry algorithms underpinning today's cutting-edge AI.

And this is just the tip of the iceberg, thanks to the larger and more nebulous threat posed by superintelligence: that is, an algorithm or collection of algorithms that achieves a singleton in any of the three domains of intelligence outlined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies, those being speed, quality/strategic planning, and collective intelligence.

The dangers posed to humanity by AI, being somewhat more difficult to conceptualize than atomic weapons since they don't involve dramatic mushroom clouds or panicked basement drills, are all the more pernicious. Even the so-called "utopian" scenario, in which AI delivers on its most optimistic promises, would bring with it a concomitant set of challenges that could best be met by stronger and more global governing entities. In this light, it seems that, if anything, Dr. Hawking has understated the case for taking action at a global level to ensure the transition into an "AI-first world" is a smooth rather than an apocalyptic one.
