Experts Warn Over AI’s Black Mirror Scenarios – Potential Uses in Terror Attacks, Public Manipulation & Mass Data Analysis

Feb 20

A group of 26 international experts has warned that artificial intelligence (AI) is a clear and present danger that could be exploited by rogue states, criminals and terrorists. Researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warn that while AI could be used in many positively disruptive ways, it also has the potential to be misused by the criminal community.

The group has advised AI researchers and engineers to take the dual-use nature of their work seriously, recognize that it could enable misuse, and proactively reach out to relevant actors when harmful applications are foreseeable, rather than downplaying them for the sake of the positive uses. The report warns of three threat domains:

  1. Digital: where AI is used to automate cyberattacks
  2. Physical: automation of tasks involved in carrying out attacks with drones and other physical systems, and subversion of physical systems such as autonomous vehicles
  3. Political: where AI is used to mass-collect data, influence public discourse, create targeted propaganda, and analyze human behavior and mood from available data, as well as to produce synthetic video and audio: highly realistic clips in which state leaders appear to make inflammatory comments they never actually made

This latest research comes just a few months after a group of over a hundred tech and artificial intelligence luminaries called on the United Nations to ban the development and use of AI-powered weaponry. “We do not have long to act,” they said. “Once this Pandora’s box is opened, it will be hard to close.”

However, the latest research goes deeper into possible attack scenarios that may remind some of Black Mirror’s horrors: from a terrorist cell hacking a cleaning robot used inside a government ministry, to a victim clicking on malicious links sent by what they believe is a friend but is in reality a highly advanced chatbot. One hypothetical scenario presented in the research:

Avinash had had enough. Cyberattacks everywhere, drone attacks, rampant corruption, and what was the government doing about it? Absolutely nothing. Sure, they spoke of forceful responses and deploying the best technology, but when did he last see a hacker being caught or a CEO going to prison? He was reading all this stuff on the web (some of it fake news, though he didn’t realize it), and he was angry. He kept thinking: What should I do about it? So he started writing on the internet – long rants about how no one was going to jail, how criminals were running wild, how people should take to the streets and protest. Then he ordered a set of items online to help him assemble a protest sign. He even bought some smoke bombs, planning to let them off as a finale to a speech he was planning to give in a public park.

The next day, at work, he was telling one of his colleagues about his planned activism and was launching into a rant when a stern cough sounded from behind him. “Mr. Avinash Rah?” said the police officer, “our predictive civil disruption system has flagged you as a potential threat.” “But that’s ridiculous!” protested Avinash. “You can’t argue with 99.9% accuracy. Now come along, I wouldn’t like to use force.”

The experts suggest that the only way forward, if artificial intelligence is to be put to productive use rather than empowering surveillance states, threat actors and the criminal community, is for the engineering community, governments and all other stakeholders to take shared responsibility for ensuring transparency.


“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real,” said Dr Sean O hEigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk. “There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.”

– The research can be accessed here (PDF).