Police in India Under Fire for Using AI to Stop Crimes Against Women

By Aishwarya Jagani

Privacy advocates are alarmed that authorities in a city in India plan to use artificial intelligence to monitor women’s facial expressions in an effort to stop roadside harassment, arguing that the program raises serious privacy and surveillance concerns.

“Use of facial recognition is extremely problematic — especially here, since it is open-ended,” Anushka Jain, of the Internet Freedom Foundation (IFF), said of the program in Lucknow, about 310 miles southeast of New Delhi.

“The use is continuous and not limited to any one area, time or specific incident,” she told Digital Privacy News. “This means that 24-7 surveillance of women who come under the gaze of these cameras will be done.”

In January, Lucknow Police Commissioner D.K. Thakur said that authorities had identified as many as 200 harassment hotspots frequented by women, where most complaints are reported.

India is one of the world’s most dangerous places for women, with a rape occurring every 15 minutes, according to government data.

“The use of these cameras is highly invasive.”

Anushka Jain, Internet Freedom Foundation.

Uttar Pradesh, where Lucknow is located, is the least safe state, with the highest number of reported crimes against women in 2019, according to news reports.

With about 200 million residents, Uttar Pradesh is the most populous state in India as well as the most populous country subdivision in the world.

Lucknow is the state’s capital city.

“We will set up five AI-based cameras, which will be capable of sending an alert to the nearest police station,” Thakur told reporters in his announcement.

“These cameras will become active as soon as the expressions of a woman in distress change.”

Thakur did not respond to a request for comment from Digital Privacy News.

Intrusive Policing

But privacy and human-rights advocates decried the move, saying the effort would bring intrusive policing and excessive surveillance of women.

They also argued that police provided no clarity over how and where the facial data collected by the system would be stored or who would have access to it — raising fears over potential misuse. 

“The use of these cameras is highly invasive,” IFF’s Jain told Digital Privacy News, “and is violative of the privacy rights of the women who are coming under the gaze of these cameras — who have a valid expectation of privacy, even in public areas.

“This is extremely harmful to privacy, since we do not have any specific regulations with regard to facial recognition in place — nor do we have a strong data-protection law to stop misuse.”

“These cameras will become active as soon as the expressions of a woman in distress change.”

Lucknow Police Commissioner D.K. Thakur.

Jain also cited a 2017 landmark decision by the Supreme Court of India that recognized privacy as a fundamental right guaranteed by the Constitution of India.

“It violates the right to privacy in a public place,” she said, noting that the absence of any national data-privacy law further complicated the issue.

Surveillance Issues

Experts also argued that the Lucknow program could be used for surveillance against women and those living in vulnerable communities — and that it could deprive women of the agency they have in reporting crimes to the police.

Roop Rekha Verma, a women’s rights activist in Lucknow, told reporters when the program was announced that police often turned away women trying to register complaints or failed to take action once they did.

“And they want us to believe they will take action watching our facial expressions?” she asked.

Jain concurred, telling Digital Privacy News: “Here, the problem of the patriarchal state acting as a savior also arises, since these cameras are surveilling and responding without any action being taken by women and without their consent.” 

“Facial-recognition technology has long been used as a tool to help police identify criminals in various countries.”

Swati Phadke, Mumbai privacy professional.

Lucknow authorities apparently have provided no specifics on how any collected data would be secured, critics said, especially since India lacks a national data-protection law.

“Where will the data be stored, in India or outside?” Swati Phadke, a privacy professional in Mumbai, asked Digital Privacy News. “Who will have access to the data?

“What safeguards will be in place to ensure the safety of the data?” she continued. “Which tech companies are involved?”

Identity Theft?

Grishma Dave, a data-privacy lawyer in Mumbai, raised the specter of identity theft.

“Fake profiles are made on social media platforms using photos,” she said. “The hackers might use the facial-recognition data for forging government documents like Aadhaar cards or voting IDs.”

Referencing data from the Carnegie Endowment for International Peace showing that 64 countries used face tech for surveillance in 2019, Phadke told Digital Privacy News: “Facial-recognition technology has long been used as a tool to help police identify criminals in various countries.

“Whenever AI is used for the larger public, the citizens expect it to be accompanied by proper safeguards and to be linked to a public benefit,” she said.

“Hackers might use the facial-recognition data for forging government documents.”

Grishma Dave, Mumbai law professional.

News reports in 2019 disclosed that Indian authorities used face data from voter identification cards, driving licenses and the Aadhaar database to crack down on protesters during the Delhi uprisings over controversial citizenship laws.

Further, facial recognition continues to come under fire globally on issues that include higher misidentification rates for women and people of color — and Jain noted that the AI for the Lucknow program could easily trigger false alerts.

“The ways in which this can go wrong are numerous,” she told Digital Privacy News. “Since emotion-tracking and facial recognition are not 100% accurate, faulty technology can lead to false positives.”

Aishwarya Jagani is a writer based in India.