Tag Archives: Social Media

Court Upholds Texas Social-Media Law On Web Censorship



A federal court upheld the validity of a Texas social media law that companies like Meta Platforms Inc. and Twitter Inc. say will prevent them from blocking hate speech and extremism, Bloomberg reported. The 5th Circuit Court of Appeals in New Orleans on Friday lifted a lower court injunction that had blocked the legislation from taking effect.

According to Bloomberg, the Texas law bars social media platforms with more than 50 million users from discriminating on the basis of viewpoint. Texas Governor Greg Abbott and other Republicans argue the legislation is needed to protect conservative voices from being silenced. But tech groups say the measure unconstitutionally bars platforms from removing neo-Nazi and Ku Klux Klan screeds or Russian propaganda about the invasion of Ukraine.

The majority opinion was written by Judge Andrew Oldham, who was nominated to the bench by President Donald Trump. Judge Edith Jones, a nominee of President Ronald Reagan, agreed with Oldham. Judge Leslie Southwick, a nominee of President George W. Bush, dissented in part from the majority.

Bloomberg also reported that critics of the law said it will wreak havoc on social media platforms by removing their ability to moderate and remove content that falls outside user guidelines. It would also allow Texas residents to sue platforms that remove their posts, on the claim that their content is being censored.

The Washington Post reported the U.S. Court of Appeals for the 5th Circuit upheld a controversial Texas social media law that bars companies from removing posts based on a person’s political ideology, overturning a lower court’s decision to block the law and likely setting up a Supreme Court showdown over the future of online speech.

According to The Washington Post, the ruling could have wide-ranging effects on the future of tech regulation, giving fresh ammunition to conservative politicians who have alleged that major tech companies are silencing their political speech. The Washington Post also reported that the decision diverges from precedent and recent rulings from the 11th Circuit and other courts, and tech industry groups are likely to appeal.

An appeal of the decision, The Washington Post wrote, could force the Supreme Court, where conservatives have a majority, to weigh in on internet regulation, which has become an increasingly politicized issue since the 2016 election. Liberals have called for new limits on the companies that would block the proliferation of harmful content and misinformation on the platforms, and conservatives have argued that the companies have gone too far in policing their sites, especially after the companies’ 2021 decision to ban Trump following the January 6 attacks on the Capitol.

Politico reported that NetChoice Vice President and General Counsel Carl Szabo said in a statement that his organization plans to appeal: “We remain convinced that when the U.S. Supreme Court hears one of our cases, it will uphold the First Amendment rights of websites, platforms, and apps.”

According to Politico, CCIA President Matt Schruers said, “We strongly disagree with the court’s decision. Forcing private companies to give equal treatment to all viewpoints on their platforms places foreign propaganda and extremism on equal footing with decent Internet users, and places Americans at risk.”

Personally, I think the court’s decision is going to immediately result in the meanest people on social media ramping up posts in which they spread misinformation about minorities and trans people. Now is an excellent time to make your social media accounts private.


California Governor Signs Bill Protecting Children’s Online Data And Privacy



California Governor Newsom announced that he has signed bipartisan landmark legislation aimed at protecting the wellbeing, data, and privacy of children using online platforms.

AB 2273, by Assemblymember Buffy Wicks (D-Oakland) and Assemblymember Jordan Cunningham (R-San Luis Obispo), establishes the California Age-Appropriate Design Code Act, which requires online platforms to consider the best interest of child users and to default to privacy and safety settings that protect children’s mental and physical health and wellbeing.

AB 2273 prohibits companies that provide online services, products or features likely to be accessed by children from using a child’s personal information; collecting, selling, or retaining a child’s geolocation; profiling a child by default; and leading or encouraging children to provide personal information.

The bill also requires privacy information, terms of service, policies, and community standards to be easily accessible and upheld – and requires responsive tools to help children exercise their privacy rights. This bipartisan legislation strikes a balance that protects kids and ensures that technology companies will have clear rules of the road that will allow them to continue to innovate.

The Children’s Data Protection Working Group will be established as part of the California Age-Appropriate Design Code Act to deliver a report to the Legislature, by January 2024, on the best practices for implementation.

AB 2273 requires businesses with an online presence to complete a Data Protection Impact Assessment before offering new online services, products, or features likely to be accessed by children.

Provided to the California Attorney General, the Data Protection Impact Assessments must identify the purpose of the online service, product, or feature, how it uses children’s personal information, and the risks of material detriment to children that arise from the data management practices.


The New York Times reported that despite opposition from the tech industry, the State Legislature unanimously approved the bill at the end of August. It is the first state statute in the nation requiring online services likely to be used by youngsters to install wide-ranging safeguards for users under 18.

According to The New York Times, the measure will require sites and apps to curb the risks that certain popular features – like allowing strangers to message one another – may pose to younger users. It will also require online services to turn on the highest privacy settings by default for children.

The New York Times also reported that the California measure could apply to a wide range of popular digital products that people under 18 are likely to use: social networks, game platforms, connected toys, voice assistants and digital learning tools for schools. It could also affect children far beyond the state, prompting some services to introduce changes nationwide, rather than treat minors in California differently.

Personally, I think that California’s AB 2273 is a great idea! I believe that every parent wants to make sure that their children will be safe when engaging in online video games, social networks, and other things that kids tend to like. It will be even better when these protections are established nationwide, to provide protection for all children in the United States.


Governor Newsom Signs Social Media Transparency Measure



California Governor Gavin Newsom announced that he has signed a first-of-its-kind social media transparency measure to protect Californians from hate and disinformation spread online. AB 587 was proposed by Assemblymember Jesse Gabriel (D – Encino) and is called “Social media companies: terms of service”. The law requires social media companies to report data on their enforcement of those policies.

Obviously, this bill, which has been signed into law by Governor Newsom, provides protection to people who live in California. It does not cover people who do not live in California.

This is, in some ways, similar to the California Consumer Privacy Act (CCPA) which became law in 2018. It gave Californians the right to know about the personal information a business collects about them and how it is used and shared; the right to delete personal information collected from them (with some exceptions); the right to opt-out of the sale of their personal information; and the right to non-discrimination for exercising their CCPA rights.

“California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” said Governor Newsom. “Californians deserve to know how these platforms are impacting our public discourse, and this brings much-needed transparency and accountability to the policies that shape the social media content we consume every day. I thank Assemblymember Gabriel for championing this important measure to protect Californians from hate, harassment and lies spread online.”

The Verge reported that Governor Newsom signed a law aimed at making web platforms monitor hate speech, extremism, harassment, and other objectionable behaviors. The Governor signed it after it passed the state legislature last month, despite concerns that the bill might violate First Amendment speech protections.

According to The Verge, AB 587 requires social media companies to post their terms of service online, as well as submit a twice-yearly report to the California Attorney General. The report must include details about whether the platform defines and moderates several categories of content including “hate speech or racism,” “extremism or radicalization,” “disinformation or misinformation,” “harassment,” and “foreign political interference.”

The law also requires social media companies to offer details about automated content moderation, how many times people viewed content that was flagged for removal and how the content was handled. AB 587 fits well with AB 2273, which is intended to tighten regulations for children’s social media use.

Personally, I think that AB 587 is a great idea. It might be exactly the push that social media companies need in order for them to actually remove hate speech, racism, extremism, misinformation, and everything else the bill requires. It would be great if social media companies removed the accounts of people who are posting threats of violence and/or engaging in harassment on their platform.

I remember when Twitter was brand new, and we all had fewer characters to use to say something. Back then, it was easy to find like-minded people who were also on Twitter. (For me, it was mostly fellow podcasters). I’d love to see Twitter go back to the good old days.


White House Creates Guiding Principles For Big Tech Platforms



The White House held a “Listening Session On Tech Platform Accountability”. A varied group of people were invited, including Assistants to the President from various parts of the federal government, people involved in civil rights causes, Sonos Chief Executive Officer Patrick Spence, and Mitchell Baker, CEO of the Mozilla Corporation and Chairwoman of the Mozilla Foundation.

The listening session resulted in a list of six “Principles for Enhancing Competition and Tech Platform Accountability”:

Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage.

We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.

Provide robust federal protections for Americans’ privacy: There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.

Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.

Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct, or materials. The President has long called for fundamental reforms to Section 230.

Increase transparency about platforms’ algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans’ lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.

Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.

The part that I think is going to upset the big social media companies the most is the bit about Section 230. Investopedia describes it as: “a provision of federal law that protects internet web hosts and users from legal liability for online information provided by third parties. In addition, the law protects web hosts from liability for voluntarily and in good faith editing or restricting access to objectionable material, even if the material is constitutionally protected.”

It is unclear to me whether President Biden wants Congress to turn the six principles into legislation – or whether he would sign such a bill. What I’m certain of is that this is likely going to make a whole lot of people talk about Section 230 on social media.


Social Media Companies Killed A California Bill To Protect Kids



California lawmakers killed a bill Thursday that would have allowed government lawyers to sue social-media companies for features that allegedly harm children by causing them to become addicted, The Wall Street Journal reported.

According to The Wall Street Journal, the measure would have given the attorney general, local district attorneys and city attorneys in the biggest California cities authority to try to hold social-media companies liable in court for features that the companies knew or should have known could addict minors. Among those targeted could have been Facebook and Instagram parent Meta Platforms, Inc., Snapchat parent Snap Inc., and TikTok, owned by Chinese company ByteDance Ltd.

In June of 2022, Meta (parent company of Facebook and Instagram) was facing eight lawsuits filed in courthouses across the US that allege that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. More specifically, the lawsuits claim that the company built algorithms into its platforms that lure young people into destructive behavior.

The Wall Street Journal also reported that the bill died in the appropriations committee of the California state senate through a process known as the suspense file, in which lawmakers can halt the progress of dozens or even hundreds of potentially controversial bills without a public vote, based on their possible fiscal impact.

The death of the bill comes after social media companies worked aggressively to stop the bill, arguing that it would lead to hundreds of millions of dollars in liability and potentially prompt them to abandon the youth market nationwide. Meta, Twitter Inc., and Snap all had individually lobbied against the measure according to state lobbying disclosures.

This doesn’t mean that a similar bill cannot be passed by the federal government. Politico reported earlier this month that the Commerce Committee advanced two bills for floor consideration: it approved the Children and Teens’ Online Privacy Protection Act on a voice vote and the Kids Online Safety Act by a unanimous 28-0 vote.

According to Politico, The Kids Online Safety Act was co-sponsored by Richard Blumenthal (Democrat – Connecticut) and Marsha Blackburn (Republican – Tennessee). That bill, if passed, would require social media platforms to allow kids and their parents to opt out of content algorithms that have fed them harmful content and disable addictive product features.

The Children and Teens’ Online Privacy Protection Act was sponsored by Bill Cassidy (Republican – Louisiana) and Ed Markey (Democrat – Massachusetts). That bill, if passed, would extend existing privacy protections for preteens to children up to age 16 and ban ads from targeting them. It would also give kids and their parents the right to delete information that online platforms have about them.

Personally, I think that parents of children and teenagers who have allowed their kids to use social media should have complete control over preventing the social media companies from gathering data on their children. Huge social media companies need to find other ways of sustaining revenue that don’t involve mining underage people’s data in the hopes of gaining money from ads.


Tech Industry Appeals Texas Social Media Law



Two Washington-based groups representing Google, Facebook, and other tech giants filed an emergency application with the Supreme Court on Friday, seeking to block a Texas law that bars social media companies from removing posts based on a user’s political ideology, The Washington Post reported.

According to The Washington Post, the Texas law took effect Wednesday after the U.S. Court of Appeals for the 5th Circuit in New Orleans lifted a district court injunction that had barred it. The appeals court action shocked the industry, which has been largely successful in batting back Republican state leaders’ efforts to regulate social media companies’ content-moderation policies.

NetChoice posted information titled: “NetChoice Announces Request for Emergency Stay from the U.S. Supreme Court”. From the information:

…On May 13, 2022, NetChoice and CCIA filed an application for an emergency stay with Justice Alito of the Supreme Court. Under Court procedures, Justice Alito may rule unilaterally or refer the matter to the full Court for consideration…

“The divided panel’s shocking decision to greenlight an unconstitutional law – without explanation – demanded the extraordinary response of seeking emergency Supreme Court intervention,” said Chris Marchese, Counsel for NetChoice.

“Texas HB 20 strips private online businesses of their speech rights, forbids them from making constitutionally protected editorial decisions, and forces them to publish and promote objectionable content,” continued Marchese. “The First Amendment prohibits Texas from forcing online platforms to host and promote foreign propaganda, pornography, pro-Nazi speech, and spam.”…

…”We are hopeful the Supreme Court will quickly reverse the Fifth Circuit, and we remain confident that the law will ultimately be struck down as unconstitutional.”

The Computer & Communications Industry Association (CCIA) posted news titled: “CCIA Files Emergency Brief Asking Supreme Court To Halt Texas Social Media Law”. From the news:

“The Computer & Communications Industry Association jointly filed an emergency brief Friday asking the U.S. Supreme Court for immediate action to prevent an unconstitutional Texas social media law from going into effect. The joint filing, submitted with co-plaintiff NetChoice, asks the Court to reinstate a lower court’s decision blocking the enforcement of the Texas statute while it is being reviewed under the First Amendment…

…CCIA has advocated for free speech online for more than 25 years. This effort has included protecting the First Amendment right for citizens and businesses to exercise both the right to speak and not be compelled to speak online.

The Verge reported that NetChoice had previously won a similar case in Florida last year, making the constitutional issues in this case even more pressing to address.

According to The Verge, the three-judge panel on the Fifth Circuit appeared to be confused about many of the basic terms being used – one judge seemed to think that Twitter was not a website, and another seemed to think there was no difference between a phone company like Verizon and a social media company like Twitter or Facebook.

It is not unheard of for a court to pick a side to support when presented with a case. Personally, I do not have any faith at all in the decision making process of the Supreme Court as it stands today.


Social Media Sites May Face Criminal Sanctions from UK Law



CNBC reported that the UK Government has announced that executives of social media companies (like Meta, Google, Twitter, and TikTok) could now face prosecution or jail time within two months of the new Online Safety Bill becoming law, instead of two years as it was previously drafted.

According to CNBC, The Online Safety Bill aims to make it mandatory for social media services, search engines and other platforms that allow people to share their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.

Axios reported that the British Government said in a statement that the goal of the bill is to “protect children, public safety, and safeguard free speech.” It imposes new rules on tech companies and adds oversight powers to Ofcom, the British communications regulator, while exempting news content from any new restrictions.

What is in The Online Safety Bill?

BBC reported (in December of 2021) that there were three things the bill set out to do:

Prevent the spread of illegal content and activity such as images of child abuse, terrorist material and hate crimes, including racist abuse

Protect children from harmful material

Protect adults from legal – but harmful – content

According to the BBC, at that time, social media companies that fail to comply with the new rules could face fines of up to £18m, or 10% of their annual global turnover.

Axios reported that more changes had been added:

Ensuring websites that publish or host pornography, including commercial sites, require that users are 18 years old or older

Adding new measures that give people more control over who can contact them and what they see online, as part of an effort to limit the reach of anonymous trolls

Requiring tech companies to act more quickly against a wide range of illegal content

Making it a crime to flash someone online

The bill must go through a formal process before it can become an act. CNBC noted that the process includes giving UK lawmakers the chance to debate aspects of the legislation.

To me, it sounds like a group of UK politicians have created a bill that could (potentially) make themselves the decision makers about what is allowed, and what is not allowed, on social media. My biggest fear is that The Online Safety Bill will be used by the meanest people to squash the posts from people who are minorities, LGBTQ+, or politicians who are on the opposing side of the aisle.