Tag Archives: Section 230

Supreme Court To Hear Two Cases Regarding Section 230



The Electronic Frontier Foundation (EFF) posted information titled: “Section 230 is On Trial. Here’s What You Need to Know”. The EFF wrote about two court cases that involve Section 230.

According to EFF, the Supreme Court next week will hear two cases – Gonzalez v. Google on Tuesday, Feb. 21, and Twitter v. Taamneh on Wednesday, Feb. 22 – that could dramatically affect users’ speech rights online.

Nearly everyone who speaks online relies on Section 230, a 1996 law that promotes free speech online, the EFF wrote. Because users rely on online intermediaries as vehicles for their speech, they can communicate to large audiences without needing financial resources or technical know-how to distribute their own speech. Section 230 plays a critical role in enabling online speech by generally ensuring that those intermediaries are not legally responsible for what is said by others.

The EFF pointed out that Section 230’s reach is broad: it protects users as well as small blogs and websites, giants like Twitter and Google, and any other service that provides a forum for others to express themselves online.

Courts have repeatedly ruled that Section 230 bars lawsuits against users and services for sharing or hosting content created by others, whether by forwarding email, hosting online reviews, or reposting photos or videos that others find objectionable. Section 230 also protects the curation of online speech, giving intermediaries the legal breathing room to decide what type of user expression they will host and to take steps to moderate content as they see fit.

Vox reported that in 2015, individuals affiliated with the terrorist group ISIS conducted a wave of violence and mass murder in Paris – killing 129 people. One of them was Nohemi Gonzalez, a 23-year-old American student who died after ISIS assailants opened fire on the café where she and her friends were eating dinner.

Vox also reported that on New Year’s Day 2017, a gunman opened fire inside a nightclub in Istanbul, killing 39 people – including a Jordanian national named Nawras Alassaf who had several American relatives. ISIS also claimed responsibility for this act of mass murder.

According to Vox, Gonzalez’s and Alassaf’s families brought federal lawsuits pinning the blame for these attacks on some very unlikely defendants. In Gonzalez v. Google, Gonzalez’s survivors claim that tech giant Google should compensate them for the loss of their loved one. In a separate suit, Twitter v. Taamneh, Alassaf’s relatives make similar claims against Google, Twitter, and Facebook.

Vox pointed out that the thrust of both of the lawsuits is that websites like Twitter, Facebook, or Google-owned YouTube are legally responsible for the two ISIS killings because ISIS was able to post recruitment videos and other content on these websites that were not immediately taken down.

In my opinion, there is no way to know for certain how the Supreme Court will decide these cases. We will likely have to wait a while before the decisions are made public.


White House Creates Guiding Principles For Big Tech Platforms



The White House held a “Listening Session On Tech Platform Accountability”. A varied group of people were invited, including Assistants to the President from various parts of the federal government, some people involved in civil rights causes, Patrick Spence, Chief Executive Officer of Sonos, and Mitchell Baker, CEO of the Mozilla Corporation and Chairwoman of the Mozilla Foundation.

The listening session resulted in a list of six “Principles for Enhancing Competition and Tech Platform Accountability”:

Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage.

We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.

Provide robust federal protections for Americans’ privacy: There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.

Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.

Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230.

Increase transparency about platforms’ algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans’ lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.

Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.

The part that I think is going to upset the big social media companies the most is the bit about Section 230. Investopedia describes it as: “a provision of federal law that protects internet web hosts and users from legal liability for online information provided by third parties. In addition, the law protects web hosts from liability for voluntarily and in good faith editing or restricting access to objectionable material, even if the material is constitutionally protected.”

It is unclear to me if President Biden is interested in having Congress turn the “six principles” into legislation – or if he would sign it. What I’m certain of is that this is likely going to make a whole lot of people talk about Section 230 on social media.


Senators Introduced a Bill to Limit Section 230 Protections



Senators Mark Warner, Mazie Hirono, and Amy Klobuchar introduced a bill called the SAFE TECH Act. The full name of the Act is “Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act”. All three of the Senators who introduced the bill are from the Democratic Party. It appears that no Republican Senators took part in this bill.

The purpose of the SAFE TECH Act is “to amend section 230 of the Communications Act of 1934 to reaffirm civil rights, victims’ rights, and consumer protections.” The bill has not yet been voted on by the full Senate.

Senator Mark Warner said, in a statement: “When Section 230 was enacted in 1996, the internet looked very different than it does today. A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online service providers even when they do nothing to address foreseeable, obvious, and repeated misuse of their products and services to cause harm.”

The SAFE TECH Act would make clear that Section 230:

Doesn’t apply to ads or other paid content – ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams;

Doesn’t bar injunctive relief – allowing victims to seek court orders where misuse of a provider’s services is likely to cause irreparable harm;

Doesn’t impair enforcement of civil rights laws – maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms;

Doesn’t interfere with laws that address stalking/cyber-stalking or harassment and intimidation on the basis of protected classes – ensuring that victims of abuse and targeted harassment can hold platforms accountable when they directly enable harmful activity;

Doesn’t bar wrongful death actions – allowing the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life;

Doesn’t bar suits under the Alien Tort Claims Act – potentially allowing victims of platform-enabled human rights violations abroad (like the survivors of the Rohingya genocide) to seek redress in U.S. courts against U.S.-based platforms.

Gizmodo reported that the SAFE TECH Act was widely endorsed upon announcement by several groups working to curb hate and extremism online, including the Anti-Defamation League and the Center for Countering Digital Hate. The Hill reported that the NAACP Legal Defense Fund also supported the bill.

Fight for the Future posted a link to a Google Doc that shows a letter from a long list of groups that are against the SAFE TECH Act. In a tweet, Fight for the Future stated that 70+ human rights groups sent the letter to Congress and the Biden-Harris Administration warning lawmakers against gutting Section 230.

Techdirt has a long and very detailed post about the SAFE TECH Act. They are very clearly against it.

One paragraph says: “A key thing to recognize is that it’s obvious that the drafters of this bill believe the myth that 230 protects “big” tech companies. The bill is written as if its only talking about Facebook, YouTube, and Twitter. Warner handwaves away the idea that the bill would destroy smaller companies in his announcement by ridiculously (and against all evidence to the contrary) saying that all startups are too small to sue, so it would only be used against larger companies.”

Personally, I believe that more should be done to prevent people from being harassed or cyber-stalked online. Social media platforms should be required to do more to uphold civil rights laws. Based on everything I’ve read, it does not appear that the SAFE TECH Act is the solution to these problems.