AI SAFETY: CHARTING OUT THE HIGH ROAD
This past year, the plight of Muslim Uighurs in China has come to light, with massive-scale detentions and human rights violations against this ethnic minority of the Chinese population. Last month, additional leaked classified Chinese government cables revealed that this policy of oppression is powered by artificial intelligence (AI): algorithms fueled by mass data collection on the Chinese population are being used to make decisions about the detention and treatment of individuals. China failing to uphold the fundamental and inalienable human rights of its population is not new; indeed, tyranny is as old as history. But the Chinese government is harnessing new technology to do wrong more efficiently.
Concerns about how governments can leverage AI also extend to the waging of war. Two major concerns about applying AI to warfare are ethics (is it the right thing to do?) and safety (will civilians or friendly forces be harmed, or will the use of AI lead to accidental escalation into conflict?). The United States, Russia, and China have all signaled that AI is a transformative technology central to their national security strategies, and their militaries plan to move ahead quickly with military applications of AI. Should this development raise the same kinds of concerns as China’s use of AI against its own population? In an era when Russia targets hospitals in Syria with airstrikes in blatant violation of international law, and indeed of basic humanity, could AI be used unethically to conduct war crimes more efficiently? Will AI in war endanger innocent civilians as well as protected entities such as hospitals?