Senate Democrats and Republicans came together to decisively approve the “Kids Online Safety and Privacy Act.” Proponents describe it as a necessary measure to protect children from drug- or gambling-related advertising on social media. But there’s widespread fear among advocates that the bill will effectively censor both harm reduction content meant to keep people safe and advocacy for drug policy reform.
The legislation, added as an amendment to Senate Bill 2073, passed the Senate by a 91-3 vote on July 30. With near-total bipartisan support, the only senators to vote against it were Mike Lee (R-UT), Rand Paul (R-KY) and Ron Wyden (D-OR). The six senators who didn’t vote included one who was just convicted on 16 felony counts and another who is running for vice president.
If it becomes law, the bill will make social media companies responsible for preventing minors’ exposure to certain content on their platforms, with enforcement falling mostly to the Federal Trade Commission (FTC) and state attorneys general.
Its text includes a provision saying that platforms have to take “reasonable care” in working to “prevent and mitigate” certain harms to children. These include mental health conditions like anxiety, depression and suicidality; substance use disorder and “addiction-like behaviors”; violence, bullying and harassment; and sexual abuse. Platforms would also be required to prevent promotion to minors of “narcotic drugs” (taken to include opioids and cocaine in various forms); “tobacco products” (including vapes and other harm reduction options, as well as cigarettes); gambling; and alcohol.
The text states that platforms may allow content discussing “prevention or mitigation of the harms.” But advocates believe that educational or advocacy material is likely to be swept up in an automated censorship dragnet.
“Whatever the FTC or platforms think is harmful to youths is likely to be taken down … content moderation tools struggle to tell the difference between say, pro-drug content and pro-drug advocacy.”
Jenna Leventoff, senior policy counsel for the ACLU National Political Advocacy Division, slammed the bill as an “act of government censorship” in a July 30 statement, saying that “The House must block this dangerous bill before it’s too late.”
“The problem is whatever the FTC or platforms think is harmful to youths is likely to be taken down,” Leventoff told Filter. “That’s going to be compounded by the fact content moderation tools struggle to tell the difference between say, pro-drug content and pro-drug advocacy. It can’t tell that you are advocating for the legalization of substances as opposed to selling that substance or encouraging kids to take that substance. It’s going to have the same keywords and images.”
Shoshana Weismann, digital media director for the R Street Institute, which works on issues including harm reduction, agrees on the nature of the risks, given that the bill is likely to make platforms extremely risk-averse.
“This is really going to hurt [educational and advocacy] content,” she told Filter. “It’s really hard to differentiate content trying to sell drugs to kids versus content about the problems with drug use, or here’s how to get help. Even a thorough study on the uses and misuses of marijuana probably would be blocked for kids because platforms don’t want to take the chance that it would get a child into marijuana, or be content they would be sued for.”
“I think the reason it got so much Senate support is because one, it’s ‘for the kids’,” Weismann continued. “If you’re going to have opposition to it, it has to be really solid and specific. Even then, you’re going to take a lot of heat if you oppose it. Then I think lawmakers think social media is causing all the world’s ills, despite meager evidence … there’s evidence that a lot of minors use social media in really productive ways and some don’t, but it has to be something that’s handled at the family level.”
The bill’s other provisions include making platforms responsible for limiting communication between children and other users, and for restricting other users’ access to children’s personal data. Platforms would have to provide parents with tools for managing children’s privacy settings, and for restricting minors’ ability to make purchases and spend time on social media.
“It would put the tools of censorship in the hands of state attorneys general, and would greatly endanger the rights, and safety, of young people online.”
Criticism has also come from the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties.
“Today’s version [of the bill] would still require surveillance of anyone 16 and under,” Jason Kelley, EFF’s activism director, wrote in a May blog post. “It would put the tools of censorship in the hands of state attorneys general, and would greatly endanger the rights, and safety, of young people online. And [the] burdens will affect adults, too, who will likely face hurdles to accessing legal content online as a result of the bill.”
The bill’s vague standards for what constitutes liability to harm children, Kelley wrote, would encourage platforms to be overly cautious in censoring content. He gave several examples of how censorship could depend on the interpretation of a platform or public official: Would they, for instance, see syringe programs or overdose prevention centers as necessary, life-saving services, or as “enabling” addiction or crime? Kelley speculated that material about a wide range of politically charged issues, from gender-affirming care to religion and guns, could all be interpreted in such a way as to earn censorship.
Censorship isn’t the only issue, however. Safety and privacy—for adults, as well as children—are also concerns. After all, how would social media platforms identify which users are minors?
The bill doesn’t require age verification in order to use social media, but both Leventoff and Weismann, among others, see age verification on the platforms as the logical outcome. That would mean everyone who wants to use social media would have to submit their government identification, or perhaps use facial recognition or other software.
“If you don’t have ID because you are undocumented, you’re a senior citizen without a driver license, don’t have the digital skills to upload it—all these reasons—you’re not going to be able to access the internet.”
Online age verification has already been a contentious issue. In July, the state of Idaho implemented a law requiring anyone accessing online pornography to submit their ID, in the name of protecting children. In response, Pornhub, one of the biggest online porn platforms, blocked access to its website in Idaho. As of 2024, a total of 16 (mostly Republican) states have passed laws similarly restricting porn sites.
Broadening online ID requirements would discriminate against many vulnerable and disadvantaged people, Leventoff said. “If you don’t have ID because you are undocumented, you’re a senior citizen without a driver license, don’t have a passport, forgot to renew your ID, don’t have the digital skills to upload it, you’re disabled—all these reasons—you’re not going to be able to access the internet at all.”
For those who do have ID and continue to use social media, such requirements and the large-scale collection of personal information could also create a heightened risk of identity theft. That’s a particular issue for minors, as Weismann explained in a blog post, citing an estimate that 25 percent of minors will be victims of identity theft before they turn 18. Long-term consequences may include being denied credit due to fraud, or even acquiring a criminal record for something they didn’t do.
“This is going to make child ID theft a huge problem: When you have a database of kids’ most sensitive information, it’s a huge target for hackers,” Weismann said. “Lawmakers are completely overlooking this.”
Widespread ID requirements would further remove what little internet privacy or anonymity is left—potentially resulting in an online atmosphere reminiscent of George Orwell’s 1984.
“There’s a chilling effect because you’re no longer able to browse the internet anonymously,” Leventoff said. “If you have to turn over your ID before going online, and you’re embarrassed by whatever you’re looking at, be it illegal or not, and you know your name, address and phone number will be associated with that browsing history, you’re going to be deterred from looking at that information.”
If such information included harm reduction material, such discouragement could be deadly.
Image (cropped) by EFF-Graphics via Wikimedia Commons/Creative Commons 3.0
The Influence Foundation, which operates Filter, has previously received restricted grants from the R Street Institute to support harm reduction reporting projects. Filter’s Editorial Independence Policy applies.