Tag Archives: Social Media

Governor Newsom Signs Social Media Transparency Measure



California Governor Gavin Newsom announced that he has signed a first-of-its-kind social media transparency measure to protect Californians from hate and disinformation spread online. Assembly Bill 587 (AB 587) was proposed by Assemblymember Jesse Gabriel (D – Encino) and is titled “Social media companies: terms of service.” The law requires social media companies to report data on their enforcement of their content moderation policies.

This bill, which has been signed into law by Governor Newsom, provides protection to people who live in California. It does not cover people who live elsewhere.

This is, in some ways, similar to the California Consumer Privacy Act (CCPA) which became law in 2018. It gave Californians the right to know about the personal information a business collects about them and how it is used and shared; the right to delete personal information collected from them (with some exceptions); the right to opt-out of the sale of their personal information; and the right to non-discrimination for exercising their CCPA rights.

“California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” said Governor Newsom. “Californians deserve to know how these platforms are impacting our public discourse, and this brings much-needed transparency and accountability to the policies that shape the social media content we consume every day. I thank Assemblymember Gabriel for championing this important measure to protect Californians from hate, harassment and lies spread online.”

The Verge reported that Governor Newsom signed a law aimed at making web platforms monitor hate speech, extremism, harassment, and other objectionable behaviors. The Governor signed it after it passed the state legislature last month, despite concerns that the bill might violate First Amendment speech protections.

According to The Verge, AB 587 requires social media companies to post their terms of service online, as well as submit a twice-yearly report to the California Attorney General. The report must include details about whether the platform defines and moderates several categories of content including “hate speech or racism,” “extremism or radicalization,” “disinformation or misinformation,” “harassment,” and “foreign political interference.”

The law also requires social media companies to offer details about automated content moderation, how many times people viewed content that was flagged for removal, and how that content was handled. AB 587 fits well with AB 2273, which is intended to tighten regulations on children’s social media use.

Personally, I think that AB 587 is a great idea. It might be exactly the push that social media companies need in order for them to actually remove hate speech, racism, extremism, misinformation, and everything else the bill requires. It would be great if social media companies removed the accounts of people who are posting threats of violence and/or engaging in harassment on their platform.

I remember when Twitter was brand new, and we all had fewer characters to use to say something. Back then, it was easy to find like-minded people who were also on Twitter. (For me, it was mostly fellow podcasters). I’d love to see Twitter go back to the good old days.


White House Creates Guiding Principles For Big Tech Platforms



The White House held a “Listening Session On Tech Platform Accountability”. A varied group of people was invited, including Assistants to the President from various parts of the federal government, people involved in civil rights causes, Patrick Spence, Chief Executive Officer of Sonos, and Mitchell Baker, CEO of the Mozilla Corporation and Chairwoman of the Mozilla Foundation.

The listening session resulted in a list of six “Principles for Enhancing Competition and Tech Platform Accountability”:

Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage.

We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.

Provide robust federal protections for Americans’ privacy: There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.

Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.

Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct, or materials. The President has long called for fundamental reforms to Section 230.

Increase transparency about platforms’ algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans’ lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.

Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.

The part that I think is going to upset the big social media companies the most is the bit about Section 230. Investopedia describes it as: “a provision of federal law that protects internet web hosts and users from legal liability for online information provided by third parties. In addition, the law protects web hosts from liability for voluntarily and in good faith editing or restricting access to objectionable material, even if the material is constitutionally protected.”

It is unclear to me whether President Biden is interested in having Congress turn the “six principles” into legislation – or whether he would sign such legislation. What I’m certain of is that this is likely going to make a whole lot of people talk about Section 230 on social media.


Social Media Companies Killed A California Bill To Protect Kids



California lawmakers killed a bill Thursday that would have allowed government lawyers to sue social-media companies for features that allegedly harm children by causing them to become addicted, The Wall Street Journal reported.

According to The Wall Street Journal, the measure would have given the attorney general, local district attorneys and city attorneys in the biggest California cities the authority to try to hold social-media companies liable in court for features that they knew, or should have known, could addict minors. Among those targeted could have been Facebook and Instagram parent Meta Platforms, Inc., Snapchat parent Snap Inc., and TikTok, owned by Chinese company ByteDance Ltd.

In June of 2022, Meta (parent company of Facebook and Instagram) was facing eight lawsuits filed in courthouses across the US that allege that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. More specifically, the lawsuits claim that the company built algorithms into its platforms that lure young people into destructive behavior.

The Wall Street Journal also reported that the bill died in the appropriations committee of the California state senate through a process known as the suspense file, in which lawmakers can halt the progress of dozens or even hundreds of potentially controversial bills without a public vote, based on their possible fiscal impact.

The death of the bill comes after social media companies worked aggressively to stop it, arguing that it would lead to hundreds of millions of dollars in liability and potentially prompt them to abandon the youth market nationwide. Meta, Twitter Inc., and Snap had all individually lobbied against the measure, according to state lobbying disclosures.

This doesn’t mean that a similar bill cannot be passed at the federal level. Politico reported earlier this month that the Commerce Committee advanced two bills for floor consideration: it approved the Children and Teens’ Online Privacy Protection Act on a voice vote and the Kids Online Safety Act by a unanimous 28-0 vote.

According to Politico, the Kids Online Safety Act was co-sponsored by Richard Blumenthal (Democrat – Connecticut) and Marsha Blackburn (Republican – Tennessee). That bill, if passed, would require social media platforms to allow kids and their parents to opt out of content algorithms that have fed them harmful content and to disable addictive product features.

The Children and Teens’ Online Privacy Protection Act was sponsored by Bill Cassidy (Republican – Louisiana) and Ed Markey (Democrat – Massachusetts). That bill, if passed, would extend existing privacy protections for preteens to children up to age 16 and ban ads from targeting them. It would also give kids and their parents the right to delete information that online platforms have collected about them.

Personally, I think that parents who have allowed their children and teenagers to use social media should have complete control over preventing the social media companies from gathering data on their kids. Huge social media companies need to find other ways of sustaining revenue that don’t involve mining data from underage users in the hope of earning money from ads.


Tech Industry Appeals Texas Social Media Law



Two Washington-based groups representing Google, Facebook, and other tech giants filed an emergency application with the Supreme Court on Friday, seeking to block a Texas law that bars social media companies from removing posts based on a user’s political ideology, The Washington Post reported.

According to The Washington Post, the Texas law took effect Wednesday after the U.S. Court of Appeals for the 5th Circuit in New Orleans lifted a district court injunction that had barred it. The appeals court action shocked the industry, which has been largely successful in batting back Republican state leaders’ efforts to regulate social media companies’ content-moderation policies.

NetChoice posted information titled: “NetChoice Announces Request for Emergency Stay from the U.S. Supreme Court”. From the information:

…On May 13, 2022, NetChoice and CCIA filed an application for an emergency stay with Justice Alito of the Supreme Court. Under Court procedures, Justice Alito may rule unilaterally or refer the matter to the full Court for consideration…

“The divided panel’s shocking decision to greenlight an unconstitutional law – without explanation – demanded the extraordinary response of seeking emergency Supreme Court intervention,” said Chris Marchese, Counsel for NetChoice.

“Texas HB 20 strips private online businesses of their speech rights, forbids them from making constitutionally protected editorial decisions, and forces them to publish and promote objectionable content,” continued Marchese. “The First Amendment prohibits Texas from forcing online platforms to host and promote foreign propaganda, pornography, pro-Nazi speech, and spam.”…

…”We are hopeful the Supreme Court will quickly reverse the Fifth Circuit, and we remain confident that the law will ultimately be struck down as unconstitutional.”

The Computer & Communications Industry Association (CCIA) posted news titled: “CCIA Files Emergency Brief Asking Supreme Court To Halt Texas Social Media Law”. From the news:

“The Computer & Communications Industry Association jointly filed an emergency brief Friday asking the U.S. Supreme Court for immediate action to prevent an unconstitutional Texas social media law from going into effect. The joint filing, submitted with co-plaintiff NetChoice, asks the Court to reinstate a lower court’s decision blocking the enforcement of the Texas statute while it is being reviewed under the First Amendment…

…CCIA has advocated for free speech online for more than 25 years. This effort has included protecting the First Amendment right for citizens and businesses to exercise both the right to speak and not be compelled to speak online.

The Verge reported that NetChoice had previously won a similar case in Florida last year, making the constitutional issues in this case even more pressing to address.

According to The Verge, the three-judge panel on the Fifth Circuit appeared to be confused about many of the basic terms being used – one judge seemed to think that Twitter was not a website, and another seemed to think there was no difference between a phone company like Verizon and a social media company like Twitter or Facebook.

It is not unheard of for a court to pick a side to support when presented with a case. Personally, I do not have any faith at all in the decision making process of the Supreme Court as it stands today.


Social Media Sites May Face Criminal Sanctions from UK Law



CNBC reported that the UK Government has announced that executives of social media companies (like Meta, Google, Twitter, and TikTok) could now face prosecution or jail time within two months of the new Online Safety Bill becoming law, instead of two years as it was previously drafted.

According to CNBC, the Online Safety Bill aims to make it mandatory for social media services, search engines and other platforms that allow people to share their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.

Axios reported that the British Government said in a statement that the goal of the bill is to “protect children, public safety, and safeguard free speech.” It imposes new rules on tech companies and adds oversight powers to Ofcom, the British communications regulator, while exempting news content from any new restrictions.

What is in The Online Safety Bill?

BBC reported (in December of 2021) that there were three things the bill set out to do:

Prevent the spread of illegal content and activity such as images of child abuse, terrorist material and hate crimes, including racist abuse

Protect children from harmful material

Protect adults from legal – but harmful – content

According to the BBC, at that time, social media companies that fail to comply with the new rules could face fines of up to £18m, or 10% of their annual global turnover.

Axios reported that more changes had been added:

Ensuring websites that publish or host pornography, including commercial sites, require that users are 18 years old or older

Adding new measures that give people more control over who can contact them and what they see online, as part of an effort to limit the reach of anonymous trolls

Requiring tech companies to act more quickly against a wide range of illegal content

Making it a crime to flash someone online

The bill must go through a formal process before it can become an act. CNBC noted that the process includes giving UK lawmakers the chance to debate aspects of the legislation.

To me, it sounds like a group of UK politicians have created a bill that could (potentially) make themselves the decision makers about what is allowed, and what is not allowed, on social media. My biggest fear is that The Online Safety Bill will be used by the meanest people to squash the posts from people who are minorities, LGBTQ+, or politicians who are on the opposing side of the aisle.


Texas Sued Over Law That Stops Social Media Sites from Banning Users



The State of Texas has been sued over its new law that prevents social media platforms from banning users over their political views, The Texas Tribune reported.

The Texas bill is called HB 20. Governor Greg Abbott signed it into law. According to The Texas Tribune, the law states that “social media platforms with over 50 million monthly users in the U.S. – a threshold that includes Twitter, Facebook, Instagram and YouTube – must publicly report details about content removal and account suspensions biannually. The platforms are also required to establish an easily accessible complaint system, where users could flag violations of the law.”

The lawsuit was filed by NetChoice, LLC and the Computer & Communications Industry Association, which represent Google and Twitter in the lawsuit. It was filed against Ken Paxton, in his official capacity as Attorney General of Texas. The case was filed in the United States District Court for the Western District of Texas, Austin Division.

Here is a key point from the lawsuit:

…The Commerce Clause does not permit a single state to dictate the rules of content for the global Internet. H.B. 20 would regulate wholly-out-of-state conduct – balkanizing the Internet by imposing onerous extraterritorial regulation on the operation of covered social media platforms. This vastly exceeds Texas’s regulatory purview and will impede commerce across the Internet…

USA Today described this Texas law as a “social media censorship law”. According to USA Today, “Texas lawmakers were motivated in large part by the suspensions of former President Donald Trump after the Jan. 6 attack on the Capitol”.

Personally, I don’t think this Texas law stands much of a chance in court. USA Today reported that a federal judge blocked a similar Florida law in June, one day before it could take effect.


Trump Plans to Start his Own Social Media Platform



A spokesman for Donald Trump announced on Fox News “#MediaBuzz” that Trump will be returning to social media with his own platform. It appears that the platform will be released in two or three months. No further information has been released other than that the new platform “is going to be big”.

Personally, I’m not surprised that Trump wants to make his own social media platform. As you may remember, he was permanently suspended from Twitter on January 8, 2021, days after the riot at the U.S. Capitol. In a blog post, Twitter stated the reason for the permanent suspension was “due to the risk of further incitement of violence.” Facebook also suspended Trump’s account for roughly the same reason.

It is possible that Trump (and whoever is helping to create his new social media platform) believes that he would be safe there to post whatever he wants. I cannot imagine what his own platform would consider egregious enough to warrant suspending Trump’s account.

The trick is finding a web-hosting company that will accept Trump’s new social media platform. Gab (another right-wing social media platform) has a history of having its service providers drop it. A quick look at Gab’s Wikipedia page shows that Apple declined Gab’s submission of its app to the App Store in 2016, and Google removed Gab’s app from its Play Store for violating its policy against hate speech.

In 2018, PayPal, GoDaddy, and Medium terminated their relationships with Gab one day after the Pittsburgh synagogue shooting (and after posts by the shooter were found on Gab). Later that day, Gab’s hosting provider, Joyent, gave it a limited time to move out before terminating service. In 2018, Gab started using Epik as a domain registrar, and may potentially be using Cloudflare (a company that provides content delivery and DDoS mitigation). In 2019, Amazon Web Services stopped hosting Gab’s fundraising site due to Amazon’s policy on hateful conduct.

Parler (another right-wing social media platform) has also faced difficulties. BuzzFeed News reported in January of 2021 that Amazon suspended Parler from Amazon Web Services. The reason for the suspension was that Amazon was not convinced that Parler could effectively moderate calls for violence and hate speech.

NPR reported that Parler sued Amazon and asked a federal judge to force Amazon to restore Parler’s web-hosting service. The judge declined to do so. Engadget later reported that Parler found web-hosting with Epik – the same company that hosts Gab.

It might be possible for Trump to launch his own social media platform. If he does, I suspect it would pull like-minded users from Twitter, Facebook, Gab and Parler. However, unless Trump also creates his own web-hosting company, there will always be a chance that his social media platform could be taken offline.