Few things have brought the whole world to its knees like terrorism. It spares no one; it makes no distinction between developing and developed countries, striking everybody from the right, left, and center. Unfortunately, as tech businesses race ahead with Artificial Intelligence, they risk placing extremely lethal weapons in the hands of terrorists.
Terrorism can target the cyber world or the physical world, but in whichever form it is launched, its effects can be devastating. When it strikes the physical world, lives are lost and property worth billions of dollars is wantonly destroyed; when it strikes the cyber world, large sums of money are carted away by these unscrupulous elements.
When attacks are launched, experience has shown that they are usually carefully planned and coordinated, resulting in huge successes for the attackers. Most times, the harm is done before you even know about it, and in most cases little can be done to mitigate the attack.
A case in point is the 9/11 attacks carried out by the Islamic extremist group al-Qaeda, regarded as the deadliest terrorist attacks ever on American soil. On the cyberterrorism side, the WannaCry attack, in which some 230,000 machines in more than 150 countries fell victim, can't be forgotten so soon.
But won't all of this look like child's play if we don't do something very quickly about the weaponization of artificial intelligence?
Take, for instance, the warning from more than 100 leaders in the artificial intelligence (AI) and robotics industries, who called for safeguards against "destabilizing effects" and cautioned that "once this Pandora's box is opened, it will be hard to close."
The Pandora's box in question is nothing other than Lethal Autonomous Weapons Systems: intelligent weapons, or better still, weapons of terror. We are talking about a type of weapon that can act and behave of its own accord once you or somebody else gives it a command. You can only imagine the extent of the damage that could be caused if such a weapon fell into the hands of a despot or terrorist.
The artificial intelligence race is raging on, and tech businesses are doing everything possible to outwit their competitors, just as in any other business setting. The time bomb is gently ticking, with you and me probably playing Russian roulette. The end result is that we are approaching a stage where machines, not people, will determine who lives and who dies.
Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, reportedly told the Lords Artificial Intelligence Committee that the Islamic State was already using drones as offensive weapons, although they were currently remote-controlled by human operators. Even without weapons of terror, the Islamic State is bad enough, killing and maiming at will; how much more horrifying its actions could become with fully autonomous intelligent weapons is better imagined than experienced.
The world definitely heaved a sigh of relief when the North Korean leader Kim Jong Un recently announced that the country had suspended nuclear and long-range missile tests, claiming that further tests were unnecessary. We can't quickly forget the kind of pressure that was mounted on North Korea, to the extent that the U.N. had to apply stiff sanctions.
And in North Korea's case, we were only talking about tests, not the actual launching of these missiles. It is mind-boggling to consider the kind of terror and atrocities that Kim Jong Un, and people like him, could unleash on the world if they had the capability to produce intelligent weapons.
It's possible that a few years ago nobody would have believed that North Korea could gain access to nuclear weapons and long-range missiles. The stark reality staring everybody in the face is that it has happened, and if North Korea could do it, nothing stops it, or anyone else with serious intent, from acquiring the technology behind lethal autonomous weapons systems.
The Taranis drone, an unmanned combat aerial vehicle, is expected to be fully operational in the UK by 2030, and the U.S., Russia, China, and the other tech giants will predictably not want to be beaten to the game. While the argument that these intelligent weapons are being developed to minimize the number of lives lost in armed conflicts may sound convincing, the negative impact overwhelmingly outweighs whatever gains they may bring.
It's a welcome development that the International Committee for Robot Arms Control (ICRAC) has set out to do something about this. We can't afford to be caught off guard; all hands must be on deck if we are to avoid the catastrophic outcome of allowing these weapons to fall into the wrong hands.
Photo Credit: Strelka Institute photo Flickr via Compfight cc