3 Ominous AI Trends We Should Be Worried About

Looking at emerging AI trends, there are three specific developments that should be a cause of worry for any sensible human. (Image: Screenshot / YouTube)

Artificial intelligence is proclaimed as the cornerstone of a new world order for humanity. Unfortunately, this new world could either be a paradise or turn into our worst nightmare. Looking at emerging AI trends, there are three specific developments that should worry any sensible person.

Unregulated government usage

Governments have started implementing AI in various departments. What’s worrying is that these AI systems are being given complete control over tasks with little to no human oversight.

In the state of Indiana, 1 million applications for food stamps, healthcare, and cash benefits were declined in three years. Why? Because the new AI implemented to manage the applications automatically classified any error in the documents as “failure to cooperate,” thereby rejecting them.

In the state of Indiana, 1 million applications for food stamps, healthcare, and cash benefits were declined in three years because the new AI system managing the applications automatically classified any error in the documents as “failure to cooperate,” thereby rejecting them. (Image: Screenshot / YouTube)

Tomorrow, such systems could be extended to almost all government services, whether it be passport applications or issuing an arrest warrant for tax defaults. Eventually, the administration could become so dependent on the AI system that governance itself might end up being managed exclusively by the machines.

Surveillance and facial recognition

When talking about surveillance, the image that comes to mind is a person watching live security camera footage. AI surveillance is leagues beyond such a simplistic model. It will have built-in psychological detectors that analyze a person’s micro-expressions and determine whether or not they pose a public risk.

For instance, facial recognition AI can analyze a person’s movements and predict whether they might be a “potential” terrorist. In the same way, employers might use facial recognition AI to determine whether someone would make a “good employee.” AI products capable of doing these things already exist.

“How would a person profiled by these systems contest the result? What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that have long been disputed in the psychological literature, but are being treated by AI startups as fact,” Kate Crawford, co-founder of the AI Now Institute, told The Intercept.

AI surveillance will have built-in psychological detectors that analyze a person’s micro-expressions and determine whether or not they pose a public risk. (Image: Screenshot / YouTube)

Social grading

What will an oppressive regime do with AI? Use it to oppress the people even further, of course. This is what the Chinese Communist Party (CCP) seems to have in mind as it continues to implement the Social Credit System. The system classifies people into different groups based on a history of actions like debt defaults, community service, legal cases, speaking against the government, and so on.

People with a high score will be provided better services and facilities, such as the benefit of skipping lines, lower interest rates, and invitations to dating groups. By contrast, those with a low score will be denied even the most basic things in life, like train tickets, entertainment, loans, and so on.

“Under tight biometric surveillance, and using 200 million surveillance cameras, every move from shopping to personal relationships can be monitored using AI… Every observed action can affect a social credit score in real time. You could go out one day with a score of 900, and if you have a few negative encounters with anything or anyone, your score could be shot to pieces by the time you return home,” according to Digital Journal.

The CCP plans to fully implement the Social Credit System countrywide by 2020, effectively bringing every individual in China under complete AI surveillance and control.
