Artificial Intelligence will pose several ethical and policy challenges in areas like human rights, according to Eleonore Pauwels, Research Fellow on Emerging Cybertechnologies at United Nations University (UNU). She fears that independent human thought itself is at risk as human societies come to rely more heavily on Artificial Intelligence.
Challenges of AI
“At an individual level, AI has already begun to shift our understanding of agency, identity, and privacy. The all-encompassing capture and optimization of our personal information — the quirks that help define who we are and trace the shape of our lives — will increasingly be used for various purposes without our direct knowledge or consent. How to protect independent human thought in an increasingly algorithm-driven world goes beyond the philosophical and is now an urgent and pressing dilemma,” Pauwels said in a UN statement.
On the plus side, the researcher feels that large-scale data collection combined with AI can open doors for innovations that push human societies forward. As an example, Pauwels cites Apple’s “Heart Study” app, which used data from the Apple Watch to identify irregular heartbeats. The technology could allow heart conditions like atrial fibrillation to be detected early, helping people receive a higher standard of healthcare.
However, AI technologies also give serious cause for alarm in their handling of the most intimate human data. Today, AI systems around the world collect and analyze data not only on our shopping patterns but also on our dating preferences, genomes, biometrics, behaviors, and many other variables. Eventually, AI programs exposed to such large datasets might develop a keen understanding of human beings. As a result, these systems could intrude into human lives ever more frequently, with the objective of influencing our behavior “for the better.”
Corporations and governments that control such AI systems pose another risk. Depending on how comprehensive a dataset they possess, they will have the ability to influence vast populations. While businesses use AI to coax the public into buying things they might not need, governments will likely use AI to identify “potential threats” to society and deal with them before a crime has even been committed.
Due to the huge benefits and risks associated with AI, Pauwels feels that creating ethical and policy frameworks on the matter is complicated. To resolve the issue, the university has created an AI and Global Governance Platform, bringing together several policy experts, researchers, and thought leaders to look into the challenges posed by Artificial Intelligence.
A grave problem posed by AI is in the field of autonomous weapons. The UN has been very vocal against implementing AI in weapons as it believes such a move would be catastrophic. “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law,” Antonio Guterres, Secretary General of the United Nations, said in a tweet.
Many Latin American countries have called for a global ban on developing such machines. However, nations like Russia have strongly opposed such a prohibition. Japanese PM Shinzo Abe has declared that his country does not intend to develop lethal autonomous weapons that operate without human involvement.
Market research firm International Data Corporation (IDC) predicts that global spending on drones and robotics will jump from about US$95.9 billion in 2018 to over US$201.3 billion by 2022. Entrepreneur Elon Musk has often called AI the “biggest risk” to human civilization, even suggesting that the technology could end up triggering another world war.
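For a sense of how fast that spending growth is, the IDC figures imply roughly a doubling in four years. A minimal sketch of the arithmetic, assuming the 2018-to-2022 window spans four full years of compounding:

```python
# Implied compound annual growth rate (CAGR) from the IDC figures above:
# about US$95.9B in 2018 rising to over US$201.3B by 2022.
start = 95.9   # billions, 2018
end = 201.3    # billions, 2022
years = 4      # assumed compounding period, 2018 -> 2022

cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")
```

This works out to an implied growth rate of roughly 20% per year, which illustrates why the sector draws so much policy attention.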