‘We’re Never Clear About the Data That’s Being Gathered’
By C.J. Thompson
Guarding private information is only getting tougher.
The lack of federal data-privacy legislation, combined with the ramifications of the intensifying pandemic, is multiplying the entry points through which personal data can be compromised.
Kristin Johnson, Asa Griggs Candler Professor of Law at Emory University School of Law, told Digital Privacy News that more public vigilance was needed.
She argues in a soon-to-be-published academic paper, “Regulating Digital Surveillance: Protecting Privacy in a Pandemic,” that financial-transaction data is as critical a privacy concern as geolocation tracking.
As such, the choice to add apps to devices should not be taken lightly.
Why should people be extra-vigilant about protecting transaction data?
If you use an app for most of your daily purchases, then even without geolocation technology (GPS or Bluetooth), the app could create a fairly comprehensive profile of you by shadowing almost all of your movements.
Data aggregators have discovered that transaction data can even serve as a proxy for geolocation data.
From your local grocery store to your preferred subway station or parking deck, financial-transaction data could reveal more about you than geolocation-tracking technology could gather.
You’ve noted that financial-transaction data is far less legally protected than geolocation data. Why?
There’s no comprehensive federal privacy bill that reaches across these distinct classes of information.
If the government imposes obligations on app designers to protect adopters’ privacy, that could be a first step toward preventing the commodification of these intimate details.
In the context of digital contact-tracing apps, the battle has been over GPS and Bluetooth tracking, but that’s almost a red herring.
Legislation geared toward preventing Bluetooth or GPS tracking addresses the concerns only partially.
Are most people aware of the different kinds of data they generate in daily life?
I don’t believe most people are aware of the extent to which they generate and share different types of data with private technology platforms.
For example, an app that helps consumers save money by rounding each purchase up to the nearest whole dollar has access to the consumer’s credit- or debit-card data.
That access also enables the app to learn much more about the consumer’s spending habits, debt obligations and relationships with the people who frequently send money to, or receive money from, the consumer.
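To make that concrete, here is a minimal Python sketch of the round-up mechanism described above. It is an illustration, not any particular app’s implementation: the transaction records, merchant names and `round_up_savings` function are invented for the example.

```python
from decimal import Decimal, ROUND_CEILING

# Hypothetical transaction records; a real round-up app would pull these
# from a linked credit- or debit-card account -- the same access that
# exposes merchants, amounts and payment relationships.
transactions = [
    {"merchant": "Corner Grocery", "amount": Decimal("14.35")},
    {"merchant": "Metro Transit", "amount": Decimal("2.75")},
    {"merchant": "Coffee Cart", "amount": Decimal("4.10")},
]

def round_up_savings(amount: Decimal) -> Decimal:
    """Return the gap between a charge and the next whole dollar."""
    return amount.to_integral_value(rounding=ROUND_CEILING) - amount

total_saved = sum(round_up_savings(t["amount"]) for t in transactions)
print(f"Swept to savings: ${total_saved}")  # Swept to savings: $1.80

# Note what the app observes in passing: every merchant name and amount,
# a transaction trail that can stand in as a proxy for location data.
```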
Notice-and-consent provisions are often woven into boilerplate use agreements, making awareness even more unlikely.
What are the most prominent types of data-tracking and what is their impact on personal privacy?
In most instances, it’s not clear to the consumer what data is being gathered.
Netflix, for example, doesn’t tell us exactly what it’s tracking. We know it may track which movies we watch from beginning to end. That’s obvious.
What may be less obvious is the possible tracking of how many seconds we spend hovering over a particular image in the app or on the platform.
That data may not be used solely to predict what movies we would like or dislike.
It could be resold or repurposed to assess something about us: for example, the likelihood that we would later be diagnosed with Parkinson’s disease.
One study found that the amount of time it takes you to click a mouse, depending on the kind of question or data you’re engaging with, could reveal the likelihood of a Parkinson’s diagnosis later in life.
My point is, we’re never clear about the data that’s being gathered.
You’ve also noted concerns surrounding the integration of “alternative data” into consumer credit-scoring processes. Which data fits that description?
There’s no universally adopted definition of “alternative data” and no formal policies regulating its use.
Over the last several years, financial-technology, or “fintech,” platforms began developing profiles of consumers for purposes beyond advertising.
A number of them claim that integrating non-traditional credit data — like social media or mobile data produced from browsing, shopping, bill and loan payments — into consumer-lending processes will lead to greater inclusion of marginalized borrowers.
But consumer advocates have raised alarms — noting that the information may serve as proxies for borrower attributes such as race, sex, marital status or national origin — identifiers that have long been protected by federal and state anti-discrimination laws.
How might data-tracking affect people of different socioeconomic levels?
Whatever data is gathered could be used in mechanisms for evaluating eligibility for benefits or services.
In the context of government-sanctioned use of platforms, it might be access to nutrition programs like SNAP (Supplemental Nutrition Assistance Program) or government-supported housing options.
In the context of private adopters or firms, it could ultimately affect eligibility for employment or credit.
It is private platforms that evaluate data to determine whether individuals at any point on the socioeconomic spectrum are eligible for consumer borrowing, whether for housing, mortgages or student loans.
Also, some people may have elected to use cash as a mechanism to preserve privacy, but there’s a significant population in this country that has no choice but to rely on cash or cash equivalents.
The pandemic became a real hindrance for that group. I’d like to see the numbers on digital-banking adoption over the course of the pandemic.
The choice to go into a bank rather than use an online platform, or to keep a smaller digital footprint, or not to participate digitally at all, is increasingly disappearing.
The service itself largely steers you toward digitization, because digitization facilitates the capture of certain information, such as where your funds have come from.
Is there a relationship between this and accepting cookies on every website you visit? What can be done to disrupt having your data tracked?
With cookies, one approach is simply to opt out. In other contexts, opting out is less straightforward.
If we allow government entities or private-sector firms to mandate the adoption of contact-tracing apps, or even roll out voluntary apps without sufficient safeguards, everyone who consents for the purposes of this public-health emergency will have consented to having their data captured.
In the absence of a safeguard that prevents the collection of non-essential data, we cannot be assured that the developers who design digital contact-tracing apps will limit themselves to GPS or Bluetooth geolocation information.
Even if those two are prohibited, there’s no express prohibition on all other forms of data, unless that is built into legislation.
I’m attracted to digital contact tracing as a case study because people are now being forced to choose between opting out and protecting their health, or between opting out and having their kids go to school.
These are the real choices people are making: opting out or going to work.
The choice to withhold consent largely disappears when the choice is between access and no access. If the thing you’re seeking access to is crucial to your welfare, as when a school or employer adopts contact-tracing apps, the notion of consent is a farce.
That kind of Sophie’s Choice is paralleled when we talk about financial-transaction data being tied to access to banking or mortgage applications.
You don’t really have an option.
C.J. Thompson is a New York writer.
Source:
- Stanford Magazine: Your Computer May Know You Have Parkinson’s. Shall It Tell You?