Emily Hutto is an Associate Video Producer & Editor for MedPage Today. She is based in Manhattan.
In this exclusive video interview, MedPage Today’s editor-in-chief Jeremy Faust, MD, sits down with U.S. Surgeon General Vivek Murthy, MD, MBA, to talk about his advisory on social media and youth mental health.
The following is a transcript of their remarks:
Faust: Hello, I’m Jeremy Faust, Editor-in-Chief of MedPage Today. I’m so glad to be joined today by Dr. Vivek Murthy.
Dr. Murthy is the 19th and 21st Surgeon General of the United States, and he has issued a number of very important public health advisories, which we will discuss today. Dr. Vivek Murthy, thank you so much for joining us.
Murthy: Of course. I’m glad to join you, Jeremy.
Faust: Recently, I think there’s been a lot of talk about social media and your advisory regarding social media use. I’ve heard you speak in a very nuanced way about this, and my initial take is that the dose makes the poison or that certain forms are less harmful than others. And right there on page five of your report, it says ‘Social media has both positive and negative impacts on children and adolescents.’
I think we could talk about some of the negative ones, but I’d like to start with something positive. What are some things that you think are actually positive about social media for young people?
Murthy: Well, I think some young people do get some benefits, and those include things like the opportunity to express themselves more freely or creatively and the opportunity to find a community of people that they may not have in person — people who may share their life experiences or concerns or interests. It’s also an opportunity for some people to stay in touch with old friends, old high school friends, or friends from college after graduation. All of that can be beneficial.
But like with any other product — and this is something I think clinicians understand well — you have to assess the risks and the benefits associated with the product. One of the things I worry about is that the conversation around social media often is devoid of nuance. It’s often very black and white, and people say ‘Is it good or is it bad?’
Well, we know as clinicians that sometimes, even when a medication has some benefits for a population, if there are significant risks associated with it overall, you may not say that it’s advisable, right? Remember Vioxx, for example: there were many patients who actually were helped by Vioxx in terms of pain reduction, whether for arthritis pain or other types of pain. But when you looked at the overall risks to the population, like the cardiovascular risks that became more evident down the line, it became clear that those benefits may not outweigh the risks.
So this is a case where, yes, there are some benefits, but what I have been really struck by, Jeremy, are the harms and the growing evidence of harms. This is something I was hearing about a lot from actually not just parents, but from young people themselves across the country. It’s what prompted me to actually do the research and ultimately issue a Surgeon General’s Advisory on social media and youth mental health in 2023.
Faust: I do want to talk about how this goes from here, because I think that the report has, as always, a lot of factual basis and public health research behind it. And then the question is: OK, now what?
In the case of the black box warning on tobacco, for example, that’s one thing that obviously led to a lot of positive public health impact. And I’m curious how this is going to look when I open up social media. Is there going to be a terms-and-agreements screen that we all kind of just click away? I mean, what’s going to happen here? How is this going to play out in 5 years if you get what you envision?
Murthy: I’m glad you asked, because this isn’t simple, right? Just as we experienced with tobacco: the warning label was helpful, and we know now from years of data that it actually did help to increase awareness of risks and change behavior, but the warning label itself was not the entire solution, right? It was one part of a larger solution set.
The same is true here with social media. I’ve called for a warning label to help parents and young people understand, number one, that we don’t have enough data to tell us that social media is safe for kids, which is what parents want to know all the time. And second, to let parents know that what we are seeing in the data is that social media use is associated with mental health harms among adolescents.
But what I called for last year was actually even more important, which was a comprehensive set of regulations that would help make social media itself safer. That’s regulations that would protect kids from harmful content and from features that seek to lure their developing brains into excessive use, like the infinite scroll, ‘like’ buttons, autoplay on videos, and 24/7 availability. But I also called for a series of measures that would require companies to be transparent with the data they have about the impact of their platforms on the mental health of youth.
Right now, Jeremy, one of the things that I find deeply disturbing — not just as a doctor or a surgeon general, but as a parent myself — is that researchers around the country tell me all the time that they can’t get full access to the companies’ data on the mental health impact of social media. As a parent, I don’t want to feel that the companies behind the products my kids are using are withholding or hiding information from me about their safety for my kids.
So these are a number of measures that we have to take. But lastly, I’ll just say this: you asked about the warning and what it would look like. With tobacco and alcohol labels, we actually undertake a rigorous scientific process to understand what kind of label would be most effective at driving the outcome of increased awareness. In that testing process, you test labels of different sizes, with different fonts, different text, and different placement, and then you ultimately see what happens.
What would be most beneficial here, too, I imagine, is a digital warning that would regularly pop up when people use their social media platform. But the frequency with which it pops up, the font size, whether there are graphics associated with it, and what the text actually says, all of that would be determined in a rigorous scientific process.
Faust: Yeah, and I’m curious about how you can get the companies to share that data, but I’m also curious how you envision regulating that content.
Because, for example, there’s content that might be shown to my kids that might actually horrify me because of my particular values, right? I can even say that there are sections from the religious background of which I am a part that I would not want my kids to see. Someone’s going to say ‘You’re going to ban the Bible?’ How do you actually tell these companies that that line from that book is not OK, but this line is fine — do you see what I’m saying? These are violent, homophobic images that I don’t want my kids to see.
Murthy: I think what you’re getting at is that there’s a big gray area in terms of content where sometimes one person might find it objectionable and another person may not. But this is where I think it’s important for us to focus on what we do agree upon with youth.
We all generally agree that minors should not be seeing pornographic content — I would hope that we can all agree on that. We certainly use that common understanding to help define what is acceptable in the movies and television shows that kids can see, for example. Yet right now, kids who can’t walk into a theater to watch an R-rated movie are seeing extreme sexual content on their social media feeds. We should all agree that that’s unacceptable.
The other thing that we can agree on is that kids should not be receiving algorithm-suggested video content that walks them through how they can take their own life, how they can attempt suicide.
Now, that might seem preposterous to some people who might say ‘Oh my gosh, how could that ever happen?’ But I’ll tell you that I’ve sat down with many parents who have had the experience, the tragic experience of losing a child to suicide after their child received numerous videos through the algorithm on their social media feed walking them through how to hang themselves or how to harm themselves in other ways.
There are buckets of content we can all agree they should not be exposed to. But more broadly, how exactly do you do that? This is where the companies have responsibility. They have created these platforms, they have created the algorithms that drive content to young people — and call me old-fashioned, but I believe if you create a product, you should be responsible for the outcomes.
If you or I, Jeremy, got together and built a hospital and were providing care to patients, yet we had high rates of line-associated infections, people were getting clots all the time because we weren’t prophylaxing patients when they came in, and people were slipping on floors because we weren’t wiping them and there was water dripping all over, we couldn’t turn around and say, ‘Wait, hold on. Don’t hold us accountable, because we’re actually taking care of people who are coming in with MIs [myocardial infarctions] and with cellulitis and with other clinical complications.’ People would rightly say, ‘Yes, that’s great you’re doing that, but you’re responsible for creating a safe environment for your patients and your visitors. If you’re not doing that, then you need to be held to account.’
What I find really striking, Jeremy, is how we have largely held companies to account when it comes to the products and services they provide to our kids, with the exception of social media, despite the fact that 95% of our kids are on it and despite the fact that we now have hundreds if not thousands of reports of kids who have been harmed through their use of social media. Yet what would’ve triggered an investigation and action with other products, and what we’ve seen trigger action when food or car seats or cars have safety issues, has not triggered similar action when it comes to social media.
I find that highly problematic. It’s why I’ve called for a fundamental change in our approach to social media and safety.
Faust: And I really appreciate the harm/benefit conversation that you’re having here. I think it’s a perfect analogy with hospital care, right? If you are having nosocomial infections and you’re not doing enough work on that, CMS is going to shut you down.
But I am curious, what does a safety study look like here? I’ve heard people say ‘We don’t know how safe or unsafe social media is for kids or for adolescents.’ What would a safety study look like? I know what a Kaplan-Meier curve looks like for a cancer therapy; I know what that study looks like. How am I going to know, as a parent and even as a physician guiding people, that yes, this is safe, this is unsafe? What kind of studies are we looking for?
Murthy: Well, this is where I think it’s so important to be investing in the research in this space. Not just so we can do the studies, but so we can actually cultivate the independent research community that can help drive these studies.
There are a number of things we can do to assess safety. We know the companies actually test new features all the time, right? Features which may seek to limit use at certain times of the day or limit exposure to certain types of content. One of the first things we can do is try to understand what those data points and experiments have taught us about the mental health impacts on youth. So, there’s data right there for us to draw from.
We also know that what are sometimes called ‘deprivation studies,’ studies where you have people actually stop using social media for a period of time and then assess them afterward in terms of their self-reported subjective mental health as well as clinical outcomes, can be helpful as well.
I’ll tell you anecdotally what young people, on college campuses in particular, tell me all the time: when they pull back and stop using social media for a period of time, it positively affects their mental health and well-being. The first couple of days they feel jittery, because many of them will actually describe themselves as being addicted to social media and having a hard time getting off. But after 3, 4, or 5 days, when they settle into a rhythm, they start feeling really good.
Now, I’d want that validated in a study, right? I’d want that looked at in a broader population. But the bottom line is there are a lot of research questions that investigators are asking right now that they need the resources and data to be able to answer. And if we can do that, then my hope is that we can get a better sense of safety.
But also, Jeremy, my hope is that we can empower parents and young people themselves to engage in the kinds of practices that can ultimately lead them to a better balance: getting the benefits of social media while avoiding the harms.
Faust: Yeah. I think it’s really important, as you say here and elsewhere, that there are people, and I think of a teenager who may be emerging as LGBTQ and doesn’t know where to turn, for whom we don’t want parental consent gating their ability to access a community online, right? Would you agree with that? For those kinds of people and that population, there’s a benefit to having a little bit of privacy as well.
Murthy: I’m glad you brought this up, Jeremy. I’ve spent a lot of time with LGBTQ youth on topics related to mental health over the last 3.5 or 4 years, and here’s what we know:
We do know that many LGBTQ youth find community online at a time when they may not have that in-person community. That can be really valuable. It can be lifesaving. What we also know is that LGBTQ youth are significantly more likely to experience harassment and bullying on social media, which has its own harms.
The question becomes: how do you balance this out? What I worry about, Jeremy, is that we’ve put LGBTQ youth and their families in an impossible situation where we’ve said, ‘In order to get some of these potential benefits, you have to expose yourself to all of these harms.’ That is not a choice that LGBTQ youth should have to make. They deserve to have a safe place where they can find and build a community, whether that’s in person or online.
This is where, to me, pushing for safety measures is not just about preventing people from using social media altogether. It’s about ensuring that those people who are benefiting from it can continue to do so without being exposed to all of these harms that we are seeing.
Disclosures
Faust disclosed being a paid writer for Bulletin, a newsletter service owned by Meta, from 2021-2023.