What You Need To Know About Section 230

Chad M, Co-Founder (IPA)
20 Dec 2025
The Communications Decency Act of 1996 was the U.S. Congress's first attempt at regulating pornographic content on the internet.
Passed by Congress on February 1, 1996, and signed by President Bill Clinton on February 8, 1996, the Act imposed criminal sanctions on anyone who:
- (A) knowingly uses an interactive computer service to send to a specific person or persons under 18 years of age, or
- (B) knowingly uses any interactive computer service to display in a manner available to a person under 18 years of age,

any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.
The Act was a noble attempt to protect children from inappropriate content on the internet. However, the Act also included Section 230, which grants online service providers immunity from liability for illegal content posted or published by their users, even if a provider fails to take action after receiving notice of the harmful or offensive content.
The premise for this provision was to protect providers from being held liable for the actions of their customers. I understand the purpose at the time, but like everything else online, the internet and internet services have evolved far beyond the intentions of Section 230.
Social media providers are no longer just providing a service that allows users to post and publish content. Instead, they are monetizing users' behaviors, regardless of the legality of those behaviors, and connecting them to other users with similar behaviors. For example, posts for prostitution services can be found on popular, U.S.-based social media sites daily.
These posts include photos and videos of men and women with detailed lists of sexual services provided and the costs for those services. Law enforcement routinely identifies these as ads placed by criminals trafficking underage victims or victims forced to provide the services. Social media providers make those posts searchable, and may even direct other users to the content based on their online behaviors.
How can a free service be monetized into billions of dollars? It’s done by connecting a user’s behavior to other users, groups, pages, and content directly related to similar behaviors.
It is commonplace for social media users to receive notifications from the platform itself, not from other users, directing them to illegal content, such as prostitution or child exploitation. The algorithms do not care about the legality of the behaviors or the content.
Social media providers do not care about the legality of the content because they are no longer held accountable. And because they are not held accountable, social media is no longer a safe place for children. We are not algorithms.