AI Monitoring of Employees Raises New Class of Privacy Fears
By Victor R. Bradley
The general public is broadly aware that artificial intelligence and increasingly powerful statistical models have given companies the ability to build intrusive customer profiles based on web-surfing behavior.
Less discussed, however, is the power such technology confers upon employers.
This neglected legal and ethical area is becoming increasingly prominent. In February, the U.S. House Committee on Education and Labor held a hearing: “The Future of Work: Protecting Workers’ Civil Rights in the Digital Age.”
The session investigated the ways algorithms and automated surveillance technology could reproduce and exacerbate existing biases in the workplace.
The explosion in remote work since March, driven by COVID-19, has made such investigation all the more necessary, as employers must increasingly rely on supervision via cyberspace.
Daniel Aranki is executive director of the Berkeley Telemonitoring Project, which builds tools to help healthcare practitioners develop applications for remotely monitoring patients. He holds a doctorate from the University of California, Berkeley.
Aranki told Digital Privacy News that the potential for intrusion was rooted in the basic architecture of this technology and was inadequately addressed by existing laws.
What is “inference threat”?
Inference threat refers to the ability of AI systems to infer undisclosed pieces of information about individuals, with high accuracy, from seemingly innocuous data.
Frequently, the data that makes these inferences possible was disclosed for purposes other than the inference in question.
Why should this worry anyone seeking to use AI to process and analyze large amounts of data?
Artificial intelligence systems have a keen ability to recognize latent patterns in data.
In some cases, these patterns correlate with features that one wishes to predict or infer.
For example, imagine that an individual’s respiratory patterns and cardiovascular vital signs can be used by an AI inference system to determine whether, and how frequently, the individual smokes marijuana.
This type of inference would be a threat to one’s privacy.
To see this, imagine that a hospital provides patients at risk of congestive heart failure with a home-monitoring system that measures their respiratory patterns and cardiovascular vital signs.
The goal of this system is to use the collected data to assess each patient’s risk of cardiac arrest.
However, the same data could also be used to infer each individual’s marijuana-smoking habits, an inference many would view as invasive.
Now imagine that one’s employer or insurance provider gains access to that medical data and is able to make this inference.
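To make the hypothetical concrete, here is a minimal sketch in Python using entirely synthetic data and an assumed (purely illustrative, not clinical) relationship between vital signs and the undisclosed attribute. It shows how a model trained on seemingly innocuous readings could predict something never disclosed:

```python
# Hypothetical sketch: inferring an undisclosed attribute from health-monitoring
# data collected for a different purpose. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Innocuous-looking features, ostensibly collected to monitor heart-failure risk.
heart_rate = rng.normal(72, 8, n)
resp_rate = rng.normal(16, 2, n)

# Assumed "undisclosed attribute" correlated with those features (illustrative only).
undisclosed = (0.05 * heart_rate + 0.2 * resp_rate + rng.normal(0, 1, n)) > 7.0

X = np.column_stack([heart_rate, resp_rate])
X_train, X_test, y_train, y_test = train_test_split(X, undisclosed, random_state=0)

# A simple classifier recovers the attribute better than chance from the vitals alone.
model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on held-out patients:", model.score(X_test, y_test))
```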
Do individuals currently have any practical protections, as opposed to legal ones, against problematic AI-based inferences?
Unfortunately, there is currently no satisfactory and complete way for individuals to protect themselves from the general inference threat, but some tools do exist to empower individuals in certain related areas.
For example, the General Data Protection Regulation (GDPR) in Europe gives individuals the right to be notified when decisions involving AI, such as profiling, are made about them, and the right, with some exceptions, to request human intervention whenever such decisions are reached in an automated way.
While this is certainly a step in the right direction toward protecting individuals’ privacy, these protections have not been adopted in more recent laws such as the California Consumer Privacy Act (CCPA).
Even the framework set forth by the GDPR does not fully cover individuals in cases other than AI-based decision making, such as general inferences.
How do data scientists and engineers ensure their AI models for analyzing and predicting behavior are accurate and reliable?
Statistical hypothesis testing.
To describe the general process in a simplified way, you’d start by posing a hypothesis whose validity you’d like to test.
Here’s an example hypothesis: blocking access to social media websites at work increases employee productivity.
The second step is to design a study or experiment that estimates the effect of this treatment (blocking social media websites) on the outcome (productivity) in a manner that controls for confounding variables.
The field concerned with designing these experiments is called (controlled) study design.
Depending on the study design and how the outcome (productivity) is measured, a statistical test is employed. It yields a quantity estimating the true size of the effect, referred to as the test statistic, and an estimate of the probability that results like these would arise by chance if the original hypothesis were actually false (the “p-value”).
This may sound like a twisted way to perform the test, but there are good scientific reasons for doing it this way.
What are they?
The p-value is an estimate of the Type I error (false-positive chance).
Furthermore, the experiment designed to test the hypothesis should be independently repeated to increase confidence in the results and to ensure reproducibility.
This process is similar to that of conducting medical clinical trials to validate the efficacy of medical interventions and treatments (e.g., pharmaceutical drugs).
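As a rough illustration of that workflow, the sketch below runs a two-sample t-test on made-up productivity scores for a hypothetical “blocked” group and a control group; the numbers, group sizes and scoring scale are assumptions, not data from any real study:

```python
# Minimal sketch of the hypothesis-testing workflow described above,
# using synthetic productivity scores for two groups of employees.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

blocked = rng.normal(loc=52, scale=10, size=60)    # scores with social media blocked
unblocked = rng.normal(loc=50, scale=10, size=60)  # scores for the control group

# Two-sample t-test: the test statistic estimates the effect size, and the
# p-value estimates the chance of seeing a difference this large by chance
# if blocking actually had no effect (the Type I error).
t_stat, p_value = stats.ttest_ind(blocked, unblocked)
print(f"test statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
```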
What are the dangers of AI systems and inferences?
For starters, AI engineers need to make sure that the results of their AI systems are generalizable.
That is, the inferences made by AI systems need to be applicable to individuals (or, in general, data points) that were not part of the original dataset used to build these AI systems.
Many factors can cause this problem: a non-representative sample used to train the system (e.g., data from only one demographic), or a phenomenon called “overfitting,” which occurs when the system works well on the original data used to train it but fails to perform as well on novel, previously unseen data.
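A brief, hypothetical sketch of the overfitting failure mode: the labels below are pure noise, so any pattern the model “learns” cannot generalize, and the gap between training and test accuracy exposes the problem.

```python
# Illustrative sketch of overfitting: perfect on training data, useless on new data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 2, size=300)  # random labels: there is no true pattern to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# An unconstrained decision tree memorizes the training set.
model = DecisionTreeClassifier().fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))  # ~1.0
print("Test accuracy:    ", model.score(X_test, y_test))    # ~0.5 (chance level)
```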
One potential pitfall of AI systems and inferences, particularly when decisions are made, completely or in part, based on them, is a failure to ensure fairness and avoid bias.
For example, consider an algorithm designed to identify and flag online video content that encourages violence.
How do you ensure that the algorithm doesn’t disproportionately censor a particular demographic?
How do you balance the decisions made by the algorithm with principles such as freedom of expression?
In many cases, this pitfall is harder to identify and correct (as evidenced by the ongoing debate about “policing” online content, such as Twitter and Facebook posts).
In part, the mechanisms described in the previous answer can help confirm some of these biases and test ways to correct for them.
These tools, however, are not sufficient to tackle this important problem on their own.
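One simple, admittedly incomplete, check along these lines is to compare flag rates across demographic groups. The sketch below uses invented counts and a crude disparate-impact ratio purely to illustrate the idea:

```python
# Sketch of a basic bias check for a content-flagging model:
# compare flag rates across groups. Counts below are synthetic placeholders.
import pandas as pd

data = pd.DataFrame({
    "group": ["A"] * 500 + ["B"] * 500,
    "flagged": [1] * 60 + [0] * 440 + [1] * 110 + [0] * 390,
})

rates = data.groupby("group")["flagged"].mean()
print(rates)

# A low ratio between groups' flag rates is a signal to investigate further;
# it is not, by itself, proof of unfairness.
print("Disparate-impact ratio:", rates.min() / rates.max())
```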
How are AI-driven behavioral analytics used in employee-monitoring software?
It is a fairly standard practice for companies and organizations to monitor their network traffic.
This is primarily done to detect any violations of computer or network usage policies and to throttle potential cyber attacks on the network. In these practices, automated analyses of network patterns and data are often employed.
In some cases, these tools also incorporate usage behavioral analyses of the individuals using the network (employees or guests).
One reason for such behavioral analyses is to assess and monitor the productivity of employees (e.g., how much time they spend on each project or milestone). The standards and levels of network monitoring and analysis differ from one place to another.
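The sketch below is a simplified, hypothetical illustration of how such software might aggregate network logs into per-employee time summaries; the field names, categories and numbers are all assumptions, not taken from any real product:

```python
# Hypothetical aggregation of network logs into per-employee time summaries.
import pandas as pd

logs = pd.DataFrame({
    "employee": ["alice", "alice", "bob", "bob", "bob"],
    "category": ["project-tracker", "social-media", "project-tracker",
                 "video-streaming", "project-tracker"],
    "minutes":  [95, 25, 40, 60, 55],
})

# Minutes per employee per traffic category.
summary = logs.pivot_table(index="employee", columns="category",
                           values="minutes", aggfunc="sum", fill_value=0)
print(summary)
```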
The concerns stemming from such monitoring are major.
Some of the more obvious concerns of these monitoring techniques include privacy and limiting the employees’ ability to self-organize or unionize (e.g., the alleged union-busting at Google late last year).
However, some of the more subtle issues that may arise from such mechanisms include, but are not limited to, bias in decision-making, inadvertently producing outcomes opposite to those desired, and legal liability for the employer.
Can you offer an example?
For example, if salary increases, promotions or bonuses are decided, in part, based on the results of network-monitoring-based productivity analysis, one may run into a situation where Campbell’s law kicks in: the longer a quantitative indicator is used to measure social processes, the less reliable that indicator becomes, as people try to outsmart it.
The result is that overall productivity ultimately decreases over time, instead of increasing.
Additionally, the AI-based systems used to determine promotions, salary increases and bonuses may inadvertently suffer from bias — resulting in unfair decisions based on factors that are not relevant to performance, such as gender or other demographic variables.
Victor R. Bradley is a writer in Clarksville, Tenn.
Sources:
- Berkeley Telemonitoring Project: “Privacy-Aware Health Telemonitoring”
- The New York Times Magazine: “How Companies Learn Your Secrets”
- 116th United States Congress: “The Future of Work: Protecting Workers’ Civil Rights in the Digital Age”