Press Release

Castor, Trahan Lead House Members Demanding Answers on Meta Profiting from Sexualized Content

WASHINGTON, DC – Today, Reps. Kathy Castor (FL-14) and Lori Trahan (MA-03), members of the House Energy and Commerce Committee’s Innovation, Data, and Commerce Subcommittee, led a request to Meta demanding answers on investigative reports showing that the company profits from Instagram serving sexual and exploitative content, including risqué photos and videos, to users interested in content featuring children.

“We are deeply concerned by reports indicating that Meta and Instagram steer users toward sexualized videos through associations with children and that the platform runs ads alongside such content without advertisers’ knowledge and in violation of their policies,” the lawmakers wrote. “This is highly disturbing and suggests that Meta’s algorithms serve the prurient interests of pedophiles, and that Meta is aware of the practice and chooses to maximize their profits while turning a blind eye to the harm caused by sexualizing children.”

A recent investigation found that Instagram’s Reels feature will show videos promoting adult sexual content, followed by branded advertisements, to adult users who follow youth gymnasts, cheerleaders, and influencers. The investigation follows a Wall Street Journal report from June that found Meta’s algorithms help form large communities of users with interests in pedophilic content, which the company struggles to remove from its services.

“Meta’s inefficiency and selectivity in responding to these concerns has again been demonstrated by the Journal, which found that Meta’s content moderation contractors are not being adequately trained, that enforcement actions against objectionable groups and accounts are often not effective, and that Meta’s own decisions about taking down groups and accounts related to child sexual abuse are ‘routinely inexplicable,’” the lawmakers continued.

Trahan and Castor have consistently pushed for stronger protections for users, particularly children and younger users, on social media platforms. In 2021, they spearheaded the congressional effort to stop Meta’s plans to create an Instagram for Kids and pressed Facebook on research suggesting the company was targeting advertisements to teens based on young users’ browsing history and preferences. In 2022, the lawmakers demanded answers from Meta on reports that the company’s algorithm was pushing eating disorder content to children and teens. The lawmakers have also introduced multiple bills to update the Children’s Online Privacy Protection Act and to implement comprehensive transparency requirements for social media companies.

The letter sent today was signed by Representatives Debbie Dingell (MI-06), Lisa Blunt Rochester (DE-AL), Adam B. Schiff (CA-30), Sean Casten (IL-06), Jake Auchincloss (MA-04), Jan Schakowsky (IL-09), and Pramila Jayapal (WA-07). The lawmakers requested answers from Meta to the following by December 22nd:

1. What safety assessments were performed on the Reels feature of Instagram before it was released?

a. Did any of those assessments focus on risks related to CSAM, sexualized content, child safety or brand safety?

b. Were any of those assessments performed by independent third parties?

c. What recommendations were made to increase the safety of the Reels feature, and were those recommendations adopted or accepted by Meta’s leadership? If not, for what reason were those recommendations not accepted?

2. Of the actions detailed in the recent blog post entitled “Our Work to Fight Online Predators”:

a. Which of them were explored or proposed before the introduction of Reels or before the recent press stories highlighting Instagram’s child safety failures?

b. Why was Reels launched without these child safety measures in place?

3. By Meta’s own estimates in the post, the new enforcement measures have resulted in over 4 million more reels actioned per month, and 16,000 additional groups and 250,000 additional devices actioned against since the summer. In addition, Meta claims that 500,000 additional accounts were disabled in August 2023 for violating child sexual exploitation policies.

a. How long were the newly disabled accounts, groups, and devices operating on Instagram until the enforcement actions were taken?

b. By Meta’s own estimates, how many reels should or would have been taken down under the new automated enforcement policies since the introduction of the Reels feature?

4. In the blog post, Meta mentioned that it “fixed technical issues, including an issue that unexpectedly closed user reports.” The Journal’s reporting also mentioned that the issue prevented “a substantial portion of user reports from being processed.”

a. How long was the bug in place?

b. When did Meta first detect this bug? How long did Meta take to fix the issue after it was detected?

c. How many reports were closed by this bug? What percentage of total reports were closed by the bug?

d. How many of them related to adult content or child sexual abuse?

e. Has Meta reopened those users' reports, and resumed acting on them?

5. Why do Instagram’s algorithms promote sexualized content to accounts that have not demonstrated an interest in such content, as the Journal’s reporting showed?

a. What topics do Instagram’s algorithms associate with sexualized content?

b. Do Instagram’s algorithms associate content related to children with sexualized content or promote sexualized content alongside content related to children, and if so, why did Meta not prevent that from happening?

c. Has Meta considered any technical measures to address the promotion of sexualized content on Instagram, and, if so, which of those measures have been adopted?

6. Does Meta enforce its policy banning adult content in ads on Instagram?

a. If so, how many ads on Instagram were rejected or not shown by Meta because of adult content?

b. How many ads were reported on Instagram because they contained adult content?

7. Does Meta respect the restrictions advertisers impose on the placement of their advertising?

a. Does Meta track violations of such restrictions?

b. How many times was an ad shown next to, preceding, or following adult content on Instagram in violation of the advertiser’s restrictions?

c. What transparency do advertisers on Meta have regarding the content that their ads are shown alongside? Do advertisers have access to a random sample of content that their ads were used to monetize?

8. How much revenue has Meta received from selling ads next to adult content in violation of advertisers’ preferences or from ads reported for containing adult content?