Communications Decency Act § 230 (1996)
1) Link to the Text of the Act
Read the statute, 47 U.S.C. § 230: https://www.law.cornell.edu/uscode/text/47/230
2) Why It Was Done
Enacted as part of the Telecommunications Act of 1996, § 230 was designed to promote the growth of the internet by shielding online platforms from liability for most user-generated content while allowing them to moderate harmful material in “good faith.”
3) Pre-existing Law or Constitutional Rights
Before § 230, online platforms risked being treated as publishers and held liable for their users’ posts; in Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), a New York court held a service liable as a publisher precisely because it moderated content. That exposure threatened free expression and innovation online. Section 230 built on First Amendment free speech principles but went further, granting platforms explicit statutory immunity for third-party content.
4) Overreach or Proper Role?
Supporters argue § 230 enabled the growth of social media and online communities by shielding platforms from litigation over user content. Critics say it lets “Big Tech” avoid accountability for harmful content and misinformation left online, as well as for allegedly biased moderation (“censorship”). Both sides debate its scope and future.
5) Who or What It Controls
- Online platforms (“interactive computer services”: social media, forums, ISPs, websites)
- Users (also immune when they share or repost content created by others)
- Courts (barred, in most cases, from treating platforms as the publisher or speaker of third-party content)
6) Key Sections / Citations
- 47 U.S.C. § 230(c)(1): “No provider… shall be treated as the publisher or speaker of any information provided by another…”
- 47 U.S.C. § 230(c)(2): Shields providers from liability for restricting access to objectionable content in “good faith.”
7) Recent Changes or Live Controversies
- Ongoing bipartisan calls for reform or repeal, citing online harms, censorship, and misinformation.
- Supreme Court cases (e.g., Gonzalez v. Google, 2023) tested the scope of platform liability, but the Court resolved them without narrowing § 230, leaving the statute intact.
- Still the backbone of U.S. internet law, but its future is uncertain.
8) Official Sources
- United States Code, 47 U.S.C. § 230 (Office of the Law Revision Counsel): https://uscode.house.gov/
- Telecommunications Act of 1996, Pub. L. No. 104-104, Title V (Communications Decency Act)