Preventing Mass Shootings vs. Invading Privacy
By Mary Pieper
Will an artificial intelligence system eventually be created that collects vast amounts of data on almost everyone in the United States to predict who is likely to become an active shooter?
The technology to make such a system possible already exists, said Jeffrey J. Blatt, a California technology lawyer and founder of X Ventures, a firm that represents clients in international technology and related matters, during a recent IT security conference in San Francisco.
However, society as a whole must decide if potentially saving hundreds of lives is worth such a huge invasion of privacy, he said.
Do you think it’s inevitable that this kind of AI predictive system will be possible?
As we look outside the United States, it’s inevitable. In non-Western countries like China, I think there’s no question that such systems are already under development.
In my discussion with an Israeli artificial-intelligence expert, who could neither confirm nor deny it, it seemed apparent that Israel has made progress toward developing a system that could theoretically predict the behavior of individuals.
How could this system invade individual privacy?
The premise of my thought experiment was this: if you knew everything that could be known about somebody, and you mapped that onto known behavioral-threat indicators for a mass shooter, then once a person crossed a certain threshold, an automated system could flag them.
Then, a team of humans would become involved and decide whether this particular person required some sort of intervention.
This system would require information from social-media posts; electronic communications such as e-mails or messages; web-search histories; medical histories, including psychological histories; financial transactions; school-behavior records; criminal-justice history; and police contacts.
It can become extremely intrusive.
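To make the mechanics concrete, here is a minimal sketch in Python of the kind of threshold-based flagging Blatt describes. The indicator names, weights, and cutoff are hypothetical placeholders invented for illustration, not anything presented in the talk:

```python
from dataclasses import dataclass, field

# Hypothetical behavioral-threat indicators and weights.
# Real indicator sets, weights, and thresholds are unknown;
# these values exist only to illustrate the pipeline.
INDICATOR_WEIGHTS = {
    "violent_social_media_posts": 3.0,
    "weapons_search_history": 2.0,
    "recent_police_contact": 2.5,
    "school_disciplinary_record": 1.5,
}

FLAG_THRESHOLD = 6.0  # arbitrary cutoff for this sketch


@dataclass
class PersonRecord:
    person_id: str
    indicators: set[str] = field(default_factory=set)


def threat_score(record: PersonRecord) -> float:
    """Sum the weights of whichever indicators are present."""
    return sum(INDICATOR_WEIGHTS.get(i, 0.0) for i in record.indicators)


def flag_for_human_review(records: list[PersonRecord]) -> list[str]:
    """Return IDs whose score crosses the threshold; per the thought
    experiment, a human team then decides whether to intervene."""
    return [r.person_id for r in records if threat_score(r) >= FLAG_THRESHOLD]


if __name__ == "__main__":
    sample = PersonRecord(
        "anon-001",
        {"weapons_search_history", "recent_police_contact",
         "violent_social_media_posts"},
    )
    print(flag_for_human_review([sample]))  # ['anon-001'] -> sent to human team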
Why?
It’s quite difficult to rule somebody out of the potential pool of active shooters in advance without hurting the accuracy of the system, so that’s what makes it so intrusive.
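The accuracy problem Blatt alludes to is partly a base-rate problem: because active shooters are extremely rare, even a very accurate classifier flags mostly innocent people. A back-of-the-envelope calculation, with deliberately invented numbers, shows why:

```python
# Illustrative base-rate arithmetic with invented numbers:
# suppose 100 true future shooters in a population of 100 million,
# and a system with 99% sensitivity and 99% specificity.
population = 100_000_000
true_shooters = 100
sensitivity = 0.99   # fraction of real shooters correctly flagged
specificity = 0.99   # fraction of non-shooters correctly cleared

flagged_true = true_shooters * sensitivity                       # ~99 people
flagged_false = (population - true_shooters) * (1 - specificity) # ~1,000,000 people

precision = flagged_true / (flagged_true + flagged_false)
print(f"People flagged: {flagged_true + flagged_false:,.0f}")
print(f"Chance a flagged person is a real threat: {precision:.4%}")  # ~0.01%
```

Under these made-up numbers, roughly a million people would be flagged to catch about a hundred, which is why shrinking the surveillance pool up front, or loosening the threshold, so quickly degrades the system.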
In the wake of COVID-19, governments are collecting more data than ever. Does this increase your concerns about the creation of the AI system you describe?
Society is apparently willing to accept less privacy in favor of potentially more security during a national or global emergency, but the probability of a mass shooting is rather small and the trade-off in privacy is rather large.
My sense is that such a system would not be socially acceptable compared to something like COVID-19, where the impact is significant and people are more willing to accept the loss of privacy, at least for a while.
Mary Pieper is a writer based in Iowa.
Source: RSA Conference, “Pre-Crime Detection of Active Shooters Using Predictive Policing Systems”