Allowing machines to select and target humans sounds like something out of an apocalyptic sci-fi movie. But as we enter another decade, it is becoming increasingly obvious that we’re teetering on the edge of that dangerous threshold.
Countries including China, Israel, South Korea, Russia and the United States are already developing and deploying precursors to fully autonomous weapons, such as armed drones that are piloted remotely. These countries are investing heavily in military applications of artificial intelligence with the goal of gaining a technological advantage on the battlefields of the future.
These killer robots, once activated, would select and engage targets without further human intervention. The United States and other countries developing them are blocking progress toward an international treaty that would ban such weapons and retain meaningful human control over the use of force. They call efforts to regulate these weapons premature, and dismiss concerns that deploying them would threaten the right to life and the principle of human dignity.
Killer robots a top existential threat
In response, the momentum to prevent a future of killer robots is intensifying. Killer robots are now seen as one of the top existential threats facing the planet. A growing number of countries and some unlikely allies are now backing the drive for a new treaty to prohibit lethal autonomous weapons systems. As Nobel Peace Laureate Jody Williams warns, such weapons would cross “a moral and ethical Rubicon.”