Author of “Reading Our Minds: The Rise of Big Data Psychiatry”
By Samantha Stone
Dr. Daniel Barron is a Yale-educated psychiatrist, author and the incoming medical director of the Interventional Pain Psychiatry Program at Harvard’s Brigham and Women’s Hospital. His first book, “Reading Our Minds: The Rise of Big Data Psychiatry,” was published in April.
Barron is an advocate for using patients’ online activity as a diagnostic tool for mental health. In short, he believes a patient’s patterns – including phone use, calendar entries, social media activity, web searches and more – can establish a benchmark for the patient’s “normal” behavior.
Barron argues that while other branches of medicine have used advanced technologies to improve outcomes, mental health professionals are using the same inexact method they’ve used for 100 years – questioning patients and looking for clues in their expressions and physical behavior as they answer.
He told Digital Privacy News data security and privacy are among his main concerns. He said patients must have complete confidence in such a system or it would never get off the ground.
This interview has been edited for length and clarity.
The clinical objective is to have more diagnostic certainty. Is it right that looking at a patient’s past online activity offers you a baseline for normal behavior, and then you can see deviations over time?
Clinical work requires data. In order to make informed decisions about treatment, you have to first know what’s going on.
Why does a patient come into your office, and how is that reason different from what’s happened over the last five or 10 years of their life? Then also, once you have an idea of what’s going on and how you can help, you need to find some way to understand whether your treatments are being helpful.
Right now, what we do in psychiatry is, we talk to our patients, and there’s a wealth of very good data in that clinical conversation. And so what these digital tools offer is greater precision in the level of data that we’re gathering and sharing in the clinical interaction.
Are you getting into what some would say is Brave New World territory when you infer meaning from brain functions?
I think one of the ways some AI algorithms are steering in the wrong direction is by trying to read meaning into something – by essentially trying to anthropomorphize the functions of an organ.
So, I’m speaking from a very clinical perspective. I’m not trying to predict who’s going to do well in a job, or how much someone is going to like a new car model that’s coming out.
All I’m trying to find is a way to measure what’s going on in an organ that I’m trying to treat with either medications or therapy or interventions.
Your book discusses various experimental uses of the tools in place now. What are some examples?
There are multiple efforts going on. One of the things I try to drive home in the book is, these technologies currently exist and they’re already being used by tech companies or by governments to try and understand our likes and dislikes.
However, one of the ways I feel this data could be helpful is by giving ownership, and by securing this data in such a way that patients can have access to it and their clinicians can access it.
So, it’s a little bit different. What I’m suggesting is that I would like to have access to more data in the clinic to make decisions, and I think this data would be useful, but I currently don’t have access to it.
And one of the hurdles to incorporating these techniques is that the big data collectors don’t want to give you access?
Yes, and this is one of the big policy questions that I think our field and our society are going to have to manage.
How valuable this data is to our health care is one giant question, which is the research part of my book. These are testable questions, things that we can answer.
But even if it does prove helpful, how do we manage relationships with tech companies to be able to have access to that data in a clinical world, but also to be able to secure that data, to make sure that we protect the privacy of the patients?
Those things have not been figured out yet.
The nature of digital data is that it’s inclined to escape its custodians. In fact, health care entities are demonstrably poor data custodians. The field is notoriously vulnerable to data breaches.
I completely agree, and I think that’s one of the reasons we need to have that conversation. I’m assuming the tools will be useful; that’s part of the thought experiment here.
One of the ways research groups have already started to address the privacy and consent issue is by having a new position called a Digital Navigator.
This Digital Navigator would sit down and explain what the data is, how long they’d want to collect it and who would have access to it – and also make sure security measures are in place so that data security isn’t a kind of joke.
If we’re going to do this, and feel comfortable telling our patients we’re going to keep their data secure, then we really need to back that up. There are multiple HIPAA compliance measures right now, but I don’t think those will be enough.
There are a lot of ideas about how we could promote better data security in the future. A lot of people are wondering whether the blockchain would be helpful. I think it’s important and it’s a conversation we need to have.
Not every tech company is reluctant to get involved in this. You’ve worked with IBM, and there are others who are out there exploring this. Can you give an overview?
I’ve been working with Guillermo Cecchi’s group at IBM; he runs the computational neuroscience and psychiatry division there. I started working there as an intern after I’d written a few pieces on this.
However, one of the things I bring up in the book is, they don’t actually have access to patients or patient data. They’re a tech company, so they don’t have a hospital, for example.
I have access to patients. I’m a physician, and I know how to code a little bit, but I’m nowhere near as sophisticated as they are in my ability to write programs or use machine learning algorithms. They do it full-time.
I also don’t have access to an enormous computer cluster the same way that they do. So finding a way to partner, and to learn, and to benefit from each other’s skills and assets has been really helpful.
A lot of the hopes that we were discussing at the beginning of our collaboration – that we would be able to find some way to quantify the mental status exam, and to trace that in the same way you would use blood pressure over time – look like something that we can begin to do.
I don’t want to overstate our conclusions at all, but it looks like we’re going down the right track.
You favor using technologies that measure physical movements and characteristics. How is that helpful?
Yes. Another level of data, one I have a lot more personal experience with, comes from simply recording clinical conversations, then mining those digital recordings for very subtle changes in facial expression, language and the acoustic properties of the voice.
One of the things that is documented in every clinical encounter is called the mental status exam. It’s essentially how someone’s facial expression mirrors conversation, how and what someone says, their intonation, the speed of their speech, the quality of their voice, all those things.
Right now, I would sit down after seeing a patient and describe, in a word or two, each of those qualities of that conversation. This is the equivalent of 150 years ago, when you’d feel someone’s pulse and say, “Oh, they have a rapid pulse.”
Now we measure pulse rate; we say over 100 is rapid. So essentially what we’re trying to do is the same thing. Can we be more precise, and be more clinical about our work here?
The subtitle of the book – “The Rise of Big Data Psychiatry” – sounds like an episode of “Black Mirror.” It also suggests we’re pretty far down the path with this. But there are a lot of constituencies to be persuaded, not least of which are privacy advocates.
I completely agree. I’m not a privacy expert, and there are many people who are more qualified and with much more experience thinking about the subtleties of what’s going to have to happen here.
I feel like I’m being very honest about what I think my limitations are as a clinician, and what the possible benefits of these tools might be. I don’t know what the world would have to look like in order for this all to come together, to make all the parts fit.