Q&A: Sen. Bill Cassidy, R-La.

Privacy Proposal Would Protect Consumers, COVID Health Data 

By Rachel Looker 

For Sen. Bill Cassidy, R-La., promoting public health while protecting consumer privacy goes hand in hand during a public-health crisis like COVID-19. 

He recently introduced the Exposure Notification Privacy Act to limit data collection and use while requiring the involvement of public-health officials in deploying contact-tracing apps and other exposure-notification systems.

The legislation would give consumers control over their health data by highlighting voluntary participation, user consent and the right to delete data on exposure-notification systems.

Cassidy introduced the bipartisan legislation in June with two Democrats, Sen. Maria Cantwell, D-Wash., and Sen. Amy Klobuchar, D-Minn.

He told Digital Privacy News that Congress was overdue in examining how personal health information was being marketed for commercial purposes, unbeknownst to users.

Sen. Bill Cassidy, R-La.

Age: 62.

Birthplace: Highland Park, Ill.

Education: Louisiana State University (B.S. and M.D.).

First Elected to Senate: 2014.

Key Committees: Finance; Joint Economic; Health, Education, Labor and Pensions; Energy and Natural Resources; Veterans' Affairs.

U.S. House Service: 2009-2015.

Personal: Married, three children.

What do you hope to accomplish with the Exposure Notification Privacy Act? 

Right now, there’s a lot of concern that when somebody tests positive for COVID — or tests positive for you name it — that they end up getting things marketed to them because that information, that sensitive information, is sold to third parties.

If I’m going to, for the sake of public health, notify people that I’ve been exposed so people can go back and look and see if they’ve been around me — now, theoretically, this is all done anonymously, but I’ve been told over and over again it’s pretty easy to reverse-engineer and find out who somebody is — then I’m not going to have confidence in that system.

“My medical information is protected health information — and it should not be used to market to me.”

Even if I do have confidence, I shouldn’t have confidence in that system.

Our point is to create confidence so that people will use it for the public-health purpose.

If you are more comfortable or likely to participate, then you will participate — and we have the power of that participation to help others think about their risk for having been exposed.

What privacy concerns do you see with exposure-notification systems like contact-tracing apps? 

We have HIPAA laws.

As a physician, I am very aware that my medical information is protected health information — and it should not be used to market to me, and it should not be used prejudicially or it should not be used in any other way but for the benefit of my health without my permission.

I want that same sort of protection extended to the people who volunteer with us. Now, we can imagine the issues that would arise, but it almost doesn’t matter what the issue is.

It is the general concept.

This should be considered protected health information — and it should not be shared without my permission.

If it’s shared with my permission, then there should be clear guidelines as to what is done with the data.

That’s just what every health care provider is trying to believe — and it has to be understood as personal health information.

What are the implications of personal health data getting into the hands of large tech companies for commercial use? 

It calls for a different perspective on how that is being used.

People don’t realize that their health data, combined with social media data, can be used in ways that are prejudicial against them.

Imagine somebody has a watch that monitors their gait — and all of a sudden, an insurance company or some other company is buying that data?

An employer might decide: “I’ve seen data from a wearable that indicates that this person is falling regularly. They may have a condition which causes them to have increased health care risk. I am never going to employ them because of that risk.”

“I want that same sort of protection extended to the people who volunteer (data) with us.”

Or: “Look, here is somebody who is monitoring her menstrual cycles and is putting information in that makes me think that she would like to have a child. I’m not going to employ her because I know that if she has a child, she is going to be out of work for a period of time so, therefore, I’m not going to take that risk.”

All of that is using data that people routinely put or routinely convey from a wearable, which could be marketed to fill in the blank.

I had an employer tell me once that if he requested, he could get a social media profile of someone’s presumed future health care costs before he offered the person a job.

That is in the marketplace right now.

How do privacy statements or terms of use fail to inform users about the use of their health data? 

Of course, you could read the legalese and maybe come to an understanding of what it means — but, then, you’ve got enough faith that you click on it routinely, or if you perhaps have your child set it up for you, they’re going to hit accept, accept, accept.

I’ve never talked to anybody who reads the privacy statements, even people who are concerned about privacy, because they’re so dense.

But this is our personal health information, which is being marketed without us clearly knowing that that is the case.

Why should public-health officials be involved in deploying exposure-notification systems? 

Theoretically, they are the ones who are using it — and they’re the ones who would think its use would be justified.

And also, by the way, you would want them to know if there’s a super-spreader event at a bar or a party — and you can look and see that everybody who was there five days before is now testing positive.

“If it’s shared with my permission, then there should be clear guidelines as to what is done with the data.”

It allows them to do epidemiology work, so it’s both for the protection of the individual … and is it being collected in the most appropriate way?

And then they can gain a lot of information that could help combat an epidemic.

Why do you think there are several other bills in Congress aimed at protecting health data? How does the Exposure Notification Privacy Act differ? 

Frankly, we’re probably overdue in looking at this.

It’s kind of a Wild West out there, where if you just think about genetic information, people are willingly giving up their genetic information.

It’s not just information about themselves, but information about their siblings, their children, their cousins, their parents.

So, if somebody has a gene that would indicate a potential for a future health issue, whatever that future health issue might be, it impacts everybody they’re genetically related to.

I’m not sure that folks know that that could be the case, or the people they’re genetically related to are giving permission for that.

“People don’t realize that their health data, combined with social media data, can be used in ways that are prejudicial against them.”

This is, if you will, a common right — but that’s something that has not been looked at at all.

So, there’s a lot of things happening right now that I think need to be looked at. 

How do you think the global pandemic has brought these health-data privacy issues to light?

I’m not sure.

Of course, I think about it all the time because that’s my medical background.

Cantwell thinks about it — she’s from Washington state — and I think she just tends to think about health issues.

It’s a charge for me to be objective.

I’m just not sure.

But I do know that we better start thinking about it, because, what I described to you earlier, the employer receiving the information regarding somebody’s social media post is real.

It’s now — and it is a preface of discrimination.

It’s choosing to discriminate — frankly, illegally — but it’s still used to discriminate, and it’s not the kind of society that we want.

In what ways can this information be used to discriminate?

You can just look at somebody’s searches.

If somebody is searching for a particular disease, you, the employer, decide they may have it because they search for it all the time.

They’re reading about it all the time — and, therefore, they’re going to be at a higher risk because this is an expensive disease to cover.

You can come up with example after example, which is not theoretical but very much based in reality — so I don’t know if people are thinking about it yet, but they absolutely should be.

Rachel Looker is a Washington writer.
