
Nobody’s Business: Surveillance Startup’s Innovations Raise Security, AI, and Privacy Concerns

Published: February 27, 2022
Bosch surveillance camera startup Azena’s innovative Android-based operating system and open developer app store raise concerns about security, privacy, and artificial intelligence.
A security camera is covered with ice on a building also covered in ice on Feb. 11, 2021 in Louisville, Kentucky. Azena, a subsidiary of Bosch, created an innovative security camera platform with an app store, but concerns around artificial intelligence, security, and privacy are still paramount. (Image: Jon Cherry/Getty Images)

Technology startup Azena aims to make camera surveillance more efficient through artificial intelligence, seeking to dominate the security operating system market with consumer-friendly software that reimagines how classical video surveillance functions.

The only problem? It appears that no one is surveilling the surveillance. 

Security and AI markets are both rapidly growing worldwide. In 2019, the Wall Street Journal predicted that one billion security cameras would be in service by 2021, half of them situated in China. Azena seeks to capitalize on this trend by implementing an operating system that allows consumers to install “smart” security applications, which promise unique offerings such as the ability to count crowds, blur transaction terminals, recognize face mask usage, and even identify emotions. 

Yet, while Azena’s ideas show potential to shake up the surveillance marketplace, skeptics question whether its efforts cause more problems than they solve. A recent article by The Intercept detailed the company’s lofty ambitions and the pitfalls of its business strategy, noting that its platform for software development leaves room for security breaches, criminal activity, and legal headaches.

Additionally, its integration of security technology with AI was critiqued for opening the door to broader human rights and privacy concerns, which have grown throughout the Coronavirus Disease 2019 (COVID-19) pandemic.

A multinationally funded history

Azena is owned and 100 percent funded by Bosch, a multinational corporation founded and based in Germany. Bosch has a history of technological innovation stretching back to before the turn of the 20th century; its website features a history section spanning from early automobile development to its modern-day counterpart, automated cars.

Now known for its dishwashers, ovens, and refrigerators, Bosch no longer focuses on weaponry as it once did during the World Wars. That historic legacy contrasts with Azena’s forward-looking focus; the smaller company’s partnership with the multinational suggests long-term aspirations to connect AI with Bosch’s home technology via the Internet of Things (IoT).

Founded in 2018, Azena has already grown to over one hundred employees, three global locations, and more than fifty business partners. Its operating system, a modified version of Android, is open to over six million developers. The applications Azena hosts can connect with 15 different camera types created by six different partners.

Security Systems News (SSN) noted the company’s corporate growth last September, commenting on various firms that had used Azena with apparent success and mentioning its partnership with systems integrator Prosegur, one of the largest security companies in the world.

Privacy & security concerns

Azena’s website characterizes its applications as “groundbreaking” and claims they can “transform the potential of security cameras.” The Intercept’s investigation poked holes in these promises, noting that many novel features, such as detecting emotion, identifying “suspicious” behavior, or estimating age, typically do not work as accurately as consumers may assume.

According to the outlet’s interviewee Gemma Galdon Clavell, technologist and director of the Eticas Foundation, many of these applications are based on what she describes as “junk science.” 

Clavell stated, “From what I’ve seen, it basically doesn’t work.”

Other issues include the question of liability in cases of hacking or criminal activity. While Azena’s platform hosts content created by independent application developers, the company is not legally responsible for any nefarious activity conducted through these applications, nor does it perform more than a cursory examination of app submissions.

Azena’s model is based on the popular Google Play Store, which operates under similar legal rules; however, The Intercept notes that companies like Google maintain a far more robust monitoring regime within their walled gardens than Azena does.

A spokesperson acknowledged to The Intercept that the company “doesn’t have the ability to check how their cameras are used and doesn’t verify whether applications sold on their store are legal or in compliance with developer and user agreements.”

Even as security risks abound, Azena argues that because the applications hosted on its infrastructure are owned by developers, it is ultimately the developers who are responsible if laws are broken or criminal activity occurs. In Azena’s world, the ethics of the applications it hosts appear to be none of its concern.

Government surveillance and overreach

Azena’s open marketing of facial recognition technology will likely be exploited by corporations and governments alike. The Verge reported in 2019 that while cameras in regions such as the U.S. are deployed mainly in settings like retail stores, China uses surveillance to monitor entire cities and urban centers; the Chinese government has notably used facial recognition software to track members of Muslim minority groups and detain them in forced work camps.

A culture of surveillance has also been fueled by the effects of COVID-19. In an interview with Security Journal UK, given before the company rebranded from its original name, Security & Safety Things, Azena’s CEO proudly noted that while other companies struggled during the crisis, his firm delivered the first face mask detection application within two weeks of the pandemic’s onset.

Concerns around the weaponization of data mining and facial recognition technologies are well-founded. A 2019 paper published by the Carnegie Endowment for International Peace noted that “there is a strong relationship between a country’s military expenditures and a government’s use of AI surveillance systems,” as approximately four-fifths of the countries with the world’s highest military budgets also deploy AI surveillance.

Additionally, the European Union recently drafted the first Artificial Intelligence Act in an effort to manage the risks associated with AI.

Facial recognition & data mining

Many of the facial recognition apps on Azena’s website are also concerning, as the company anticipates they will use surveillance footage to train video AI algorithms. According to The Intercept, “The company’s online portal for developers states that camera users ‘may contribute to enhancements via crowd-generated data.’”

While Azena’s apparent motivation is to improve security and data collection methods, integrating video surveillance software with AI facial recognition creates the potential for misuse or exploitation by governments and corporations, as well as by anyone wishing to use the data to develop synthetic faces via AI.

A recent study published Feb. 22 in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) found that the average person was unable to distinguish photographs of real human faces from those generated by computers.

These fake faces are more commonly known as deepfakes. Deepfakes are created by dueling algorithms: one generates synthetic images, while the other tries to determine whether a given face is real or fake. As the two programs push each other to improve, deepfakes become increasingly difficult to distinguish from the real thing.
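The dueling-algorithm setup described above is known as a generative adversarial network (GAN). Below is a minimal, illustrative sketch of that push-and-pull dynamic, not code from Azena or any real deepfake system: face images are replaced with a hypothetical one-dimensional "data" distribution, but the adversarial loop, a generator learning to fool a simple logistic discriminator, follows the same principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for "real faces": a 1-D distribution centred at REAL_MEAN.
REAL_MEAN = 4.0

theta = 0.0      # generator parameter: shifts random noise toward the data
w, b = 0.1, 0.0  # discriminator parameters (a simple logistic classifier)
lr = 0.02        # learning rate for both players

for step in range(5000):
    real = REAL_MEAN + rng.normal(0.0, 0.5)  # one "real" sample
    fake = theta + rng.normal(0.0, 0.5)      # one generated sample

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator update: nudge theta so the discriminator rates fakes as real.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1.0 - d_fake) * w

# After training, the generator's output distribution sits near REAL_MEAN.
print(f"generator mean after training: {theta:.2f}")
```

Real deepfake systems use deep convolutional networks trained on millions of images rather than a single number, but the dynamic is the same: every improvement in the discriminator is precisely what forces the generator to produce more convincing fakes.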

The study accounted for the effects of race and gender, noting that “the realism of synthetic faces extends across race and gender.”

Disturbingly, researchers concluded that “synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy.” 

But what does this have to do with Azena, which mainly collects facial data in an effort to “teach” its programs better facial recognition? The study’s authors conclude by stating that “because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.” [emphasis added]

In other words, allowing anyone to access security, data, facial recognition, and data manipulation technology could lead not only to privacy problems, but also to propaganda and misinformation.

The surveillance industry, propelled by corporate and governmental motives alike, will likely be a source of contention in coming years. Yet the questions of who owns surveillance data and who takes responsibility for malicious software have yet to be decided. While Azena makes money off the applications in its marketplace, the online space, for now, remains a neutral ground. The land of security footage really is nobody’s business.