Letter from the Editor #7: The Stanley-Penre Interview Now on WesPenre.com!

Posted by Wes Penre on Saturday, September 3, 2016 @ 12:45 PM

Rather than updating Letter #6, I’ll write a new one instead.

The interview Robert Stanley at UNICUS did with me some weeks ago is now published on my website. Here is the HTML version: http://wespenre.com/Interviews/Interview_no.1__Insightful_Interview__Wes_Penre_Interviewed_by_Robert_Stanley__9-2-16.htm, and here is the PDF version for those who want to download it: http://wespenre.com/Interviews/PDF/Interview_no.1__Insightful_Interview__Wes_Penre_Interviewed_by_Robert_Stanley__9-2-16.pdf

Hope you’ll enjoy!
Wes


One thought on “Letter from the Editor #7: The Stanley-Penre Interview Now on WesPenre.com!”

  1. SCIENCE & TECHNOLOGY: http://www.businesslive.co.za/rdm/technology/2017-09-20-peter-apps-robots-going-to-war-is-no-longer-science-fiction/

    PETER APPS: Robots going to war is no longer science fiction

    ‘Already, drones are able to fly themselves independently so they can stay airborne if they lose touch with human pilots. Soon they may be able to make their own tactical decisions’

    20 SEPTEMBER 2017 – 08:03, PETER APPS [image: A killer robot from the Terminator movies. Picture: SUPPLIED]

    Russia’s latest “Zapad” military exercise is underway on NATO’s eastern border. Tens of thousands of soldiers are taking part in the massive four-yearly war games, which are both a drill and a show of strength to the West. Next time around, in 2021, those troops might be sharing their battle space with a different type of force: self-driving drones, tanks, ships and submersibles.


    Drone warfare is hardly new – the first lethal attack conducted by an American unmanned aerial vehicle took place in Afghanistan in October 2001. What is now changing fast, however, is the ability of such unmanned systems to operate without a guiding human hand.

    That’s a truly revolutionary shift – and one every major nation wants to lead. Critics have long feared countries might be more willing to go to war with unmanned systems. Now, some see a very real risk that control might pass beyond human beings altogether.

    Tech entrepreneur Elon Musk has long warned that humanity might be on the verge of some cataclysmic errors when it comes to artificial intelligence. Last month, he ramped that up with a warning that the development of autonomous weapons platforms might provoke a potentially devastating arms race.

    As if to reinforce Musk’s point, Russian President Vladimir Putin told students shortly thereafter that he believed the technology would be a game changer, making it clear Russia would plow resources into it. “The one who becomes leader in this will become ruler of the world,” Putin was quoted as saying. [image: Russian President Vladimir Putin, flanked by senior Russian officials, watches the Zapad-2017 war games in the Leningrad region, Russia September 18, 2017. Picture: Sputnik/Mikhail Klimentyev/Kremlin via REUTERS]

    China, too, is pushing ahead; some experts believe it is now the global leader in developing autonomous swarms of drones.

    Already, drones are able to fly themselves independently so they can stay airborne if they lose touch with human pilots. Soon they may be able to make their own tactical decisions. At Georgia Tech in the United States this summer, researchers programmed swarms of light drones to fight their own aerial dogfights. The U.S. military is trying out similar products.

    That means one operator could command many, many more drones – or that they might not need direct supervision at all. [image: Hitachi has developed artificial intelligence which can track and identify ‘suspects’. Picture: REUTERS]

    Even more important than what is happening in robotics may be the wider developments in artificial intelligence. That won’t necessarily make warfare more deadly – a bomb dropped from a drone is not in itself less lethal than one from a manned aircraft. While it’s possible that greater accuracy might reduce casualties, some analysts fear that the changes brought by new unmanned systems might themselves fuel new conflicts.

    “Radical technological change begets radical government policy ideas,” concluded a July report on the topic produced for the U.S. intelligence community by Harvard University’s Belfer Center. It warned an “inevitable” AI arms race could prove as revolutionary as the invention of nuclear weapons.

    Artificial intelligence could dramatically increase the efficiency of surveillance technology, allowing a single system to monitor perhaps millions of digital conversations, hacked personal devices and other sources of information. The implications could be terrifying, particularly in the hands of a state with little or no democratic oversight.

    At a recent UK panel discussion, Britain’s former Special Forces director Lieutenant General Sir Graeme Lamb predicted that by 2030, technological breakthroughs – not just AI, but quantum computing and beyond – would produce entirely unpredictable changes. Special force teams, he suggested, might well have a robotic and artificial intelligence component deployed alongside them – the U.S. Army calls this “manned-unmanned teaming.” [image: Russia’s FEDOR robot, which can fire weapons, is demonstrated. Picture: SUPPLIED]

    That sounds like something out of science fiction – and it might well look like it. Last year, Russia unveiled its FEDOR humanoid military robot, which it demonstrated firing a gun.

    Most countries deliberately keep their defense AI secret, ultimately fueling the arms race Musk was warning about. Some scientists already worry about a real-world version of the premise for the Arnold Schwarzenegger-starring “Terminator” film franchise in which the United States, fearing cyber attack, hands control of key military systems to the artificial intelligence Skynet. (Skynet, fearing its human creators might choose to turn it off, immediately launches a full-scale nuclear attack on humanity.)

    For now, Western nations at least look keen to keep a human in the “kill chain.” Not all countries may make that choice, however. Russia has long had a reputation for trusting machines more than people, at one stage considering – and, some evidence suggests, building – an automated system to launch its nuclear arsenal should its command structure be destroyed by a first strike.


    Outside of the military, there is evidence AI algorithms have already alarmed their creators. In August, Facebook shut down an AI experiment after programs involved began communicating with each other in a language the humans monitoring them could not understand.

    Is this the end for ordinary human soldiering? Almost certainly not. It’s even been argued that a more complex, high-tech battlefield might require more soldiers, not fewer.

    Robotic systems may be vulnerable to hacking and jamming, or may simply be rendered inoperable through electronic warfare. Such techniques have allowed U.S.-led forces in Iraq to largely negate the off-the-shelf drones used by Islamic State. Russia has used similar techniques against Western-made drones in Ukraine.

    That’s a worry for armed forces betting – like many industries – on automation. Britain’s new aircraft carrier has only a fraction of the sailors of its only slightly larger U.S. carrier counterparts, relying heavily on automatic systems to manage weaponry and damage control. The latest Russian tank, the T-14 Armata, has an automated turret that will usually be out of reach of its crew. Such techniques have clear advantages – but also mean that interfering with electronics could leave them useless.

    Such technology is coming whether it is a good idea or not. Indeed, even relatively old military equipment increasingly can be retrofitted. Russian engineers have already demonstrated that they can adapt the 20-year-old T-90 tank to be controlled remotely.

    Ironically, the North Korean crisis reminds us that the most dangerous technologies may well remain those invented more than 70 years ago – atomic weapons and the missiles that carry them. Even if mankind can avoid a nuclear apocalypse, however, the coming AI and robotic revolution may prove an equal existential challenge.

    – Reuters

