Category Archives: Facebook

Coalition of States Investigates Instagram’s Effects on Children

A bipartisan coalition of state attorneys general said Thursday it is investigating how Instagram attracts and affects young people, amping up the pressure on parent company Meta Platforms, Inc. over potential harms to its users, the Wall Street Journal reported. According to the Wall Street Journal, the attorneys general said they are investigating whether the company, formerly known as Facebook, violated consumer protection laws and put the public at risk.

As you may recall, in September of 2021, The Wall Street Journal published an article titled “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”.

The documents leaked to The Wall Street Journal included a Facebook study of teen Instagram users in the UK and US, which found that more than 40% of those who reported feeling “unattractive” said the feelings started when using Instagram. The documents also stated that Facebook found that among teens who had suicidal thoughts, 13 percent of UK users and 6 percent of US users said these impulses could be traced back to Instagram.

The Los Angeles Times reported that the attorneys general include California, Texas, Nebraska, Massachusetts, Florida, Kentucky, Tennessee, New Jersey and Vermont.

California Attorney General Rob Bonta said, “For too long, Meta has ignored the havoc that Instagram is wreaking on the mental health and well-being of our children and teens. Enough is enough. We’ve undertaken this nationwide investigation to get answers about Meta’s efforts to promote the use of this social media platform to young Californians – and determine if, in doing so, Meta violated the law.”

Nebraska Attorney General Doug Peterson said, “When social media platforms treat our children as mere commodities to manipulate for longer screen time engagement and data extraction, it becomes imperative for state attorneys general to engage our investigative authority under our consumer laws.”

It seems to me that it is time for Facebook (Meta) to possibly face some consequences. When a bipartisan coalition of attorneys general works together on something, it seems likely they will be able to enforce changes.

Facebook Doesn’t Want You to Have a Chronological News Feed

What’s the most important thing you want to see when you log into Facebook? For many, it is posts from friends and family. Facebook is the easiest way to connect with not only relatives, but also people that you attended high school with.

The Washington Post reported that Facebook has explored what happens when it turns off its controversial news feed ranking system – the software that decides for each user which posts they’ll see and in what order. That leaves users to see all the posts from their friends in simple, chronological order. According to The Washington Post, Facebook’s researchers decided that users are better off with Facebook’s software calling the shots.

In its article, The Washington Post noted that information from whistleblower Frances Haugen argued that Facebook’s algorithm was central to Facebook’s problems. It amplified and rewarded “hateful, divisive, misleading and sometimes outright false content by putting it at the top of users’ feeds”.

Axios reported that a bipartisan group of House lawmakers introduced a bill that would require online platforms to let users opt out of having personal data-driven algorithms select the content they see. It is called the Filter Bubble Transparency Act. The purpose of this bill is “To require that internet platforms give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data.”

There is also a Senate version of the Filter Bubble Transparency Act. It is also bipartisan. The bill would require large-scale internet platforms that collect data from more than 1 million users and gross more than $50 million per year to provide greater transparency to consumers and allow users to view content that has not been curated as a result of a secret algorithm.

Senator John Thune stated on his website that the Filter Bubble Transparency Act “would make it easier for internet platform users to understand the potential manipulation that exists with secret algorithms and require large-scale platforms to allow those users to consume information outside of that potential manipulation zone or ‘filter bubble’”.

Personally, I’ve always thought that the algorithms used by social media companies are manipulative. In my opinion, non-chronological algorithms are used to evoke rage and/or fear in users, and also are a pipeline to spread misinformation. We would all be emotionally healthier if we could opt out of the imposed algorithm and into a chronological timeline of self-selected topics and users.

Facebook Allowed Plagiarized Content on its Platform

The content you view on Facebook might not actually be created by the account you see it on. It may have come from someone else’s Facebook profile – without any credit given to the person who created it.

Facebook has allowed plagiarized and recycled content to flourish on its platform despite having policies against it, the tech giant’s researchers warned in internal memos. This was reported by The Wall Street Journal as part of its series on “The Facebook Files”.

About 40% of the traffic to Facebook pages at one point in 2018 went to pages that stole or repurposed most of their content, according to a research report that year by Facebook senior data analyst Jeff Allen, one of a dozen internal communications reviewed by The Wall Street Journal. Pages are used by businesses and organizations to disseminate content on Facebook, while individual users put content on what Facebook calls “profiles”.

According to The Wall Street Journal, the researchers also wrote that Facebook has been slow to crack down on copyright infringement for fear of opening itself to legal liability.

In May of this year, (according to The Wall Street Journal), Facebook began reporting for the first time the number of copyright violations it said it identified and removed proactively, saying at the time the company had been building the technology to do so “over the past few years”. The Wall Street Journal also reported that Facebook’s penalties for posting unoriginal content aren’t great enough to meaningfully discourage the practice.

It appears, based on what The Wall Street Journal reported, that of the top 20 posts, 15 were copied outright or repurposed from other Facebook pages or social networks such as Reddit and Twitter. One post was deleted, and only four were completely original pieces of content.

There are two paragraphs from The Wall Street Journal that stood out to me. One states that posting unoriginal content continues to be a formula for success on Facebook. The other states that the tactic is an effective way to build a large audience on Facebook and has been used by foreign and domestic groups that post divisive content and peddle false information on social media.

I stopped using Facebook a long time ago. If you are still using that platform, it is time to seriously consider whether or not the photo of a cute puppy you see actually came from the profile you are looking at. If you don’t know the person behind the profile personally, you need to consider that the content on it could have been plagiarized.

Facebook Settles Claims Over Discrimination Against U.S. Workers

Facebook settled claims that it refused to recruit or hire U.S. workers for positions it set aside for temporary visa holders, CNBC reported. According to CNBC, Facebook settled with not only the Department of Labor, but also the Department of Justice. These were two separate lawsuits. The Department of Justice (DOJ) posted a release on its website that shared information about these lawsuits.

The Justice Department’s settlement resolves its claims that Facebook routinely refused to recruit, consider or hire U.S. workers, a group that includes U.S. citizens, U.S. nationals, asylees, refugees and lawful permanent residents, for positions it has reserved for temporary visa holders in connection with the PERM process. Additionally, the Labor Department’s settlement resolves issues it separately identified through audit examinations of Facebook’s recruitment activities related to its PERM applications filed with the Employment and Training Administration’s Office of Foreign Labor Certification (OFLC).

PERM stands for “permanent labor certification program.”

Specifically, the lawsuit alleged that, in contrast to its standard recruitment practices, Facebook used recruiting methods designed to deter U.S. workers who applied to the positions, such as requiring applications to be submitted by mail only; refused to consider U.S. workers who applied to the positions; and hired only temporary visa holders.

According to the Justice Department’s lawsuit, Facebook’s hiring for these positions intentionally discriminated against U.S. workers because of their citizenship or immigration status, in violation of the anti-discrimination provision of the Immigration and Nationality Act (INA).

Under the DOJ’s settlement, Facebook will pay a civil penalty of $4.75 million to the United States, will pay up to $9.5 million to eligible victims of Facebook’s alleged discrimination, and train its employees on the anti-discrimination requirements of the INA. It also must accept electronic resumes or applications from all U.S. workers who apply.

The DOJ says that this civil penalty and backpay fund represent the largest fine and monetary award that the Department of Justice has ever recovered in the 35-year history of the INA’s anti-discrimination provision.

Facebook has been in a bit of trouble lately. The Wall Street Journal reported on Facebook documents that had been leaked to the newspaper. A whistleblower shared what she knew about the behind-the-scenes workings of Facebook on “60 Minutes”.

Today, Facebook was fined by the Department of Justice for being less than honest in how it hired workers. It is a small fine, compared to the vast wealth of Facebook – but it still sends a message to Facebook to stop being awful.

Facebook Still Has A Problem With Hate Speech

Facebook’s AI can’t consistently identify first-person shooting videos, racist rants, and the difference between cockfighting and car crashes. This comes from internal Facebook documents that were reviewed by The Wall Street Journal.

On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules – a low-single-digit percent, they say. When Facebook’s algorithms aren’t certain enough that the content violates the rules to delete it, the platform shows that material to users less often – but the accounts that posted the material go unpunished.

According to The Wall Street Journal, a team of Facebook employees concluded that the AI systems were removing posts that generated only 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook’s policies against violence and incitement.

Engadget reported that “there’s little doubt that Facebook is engaged in some spin”. In testimony, whistleblower Frances Haugen asserted that Facebook can only catch a “very tiny minority” of offending material. According to Engadget, Haugen also alleged that Facebook resisted implementing safer algorithms and other efforts to minimize hateful and divisive distractions.

Facebook’s VP of Integrity, Guy Rosen, posted “Hate Speech Prevalence Has Dropped by Almost 50% on Facebook”. This was published the same day as The Wall Street Journal article. Here is a small portion of that post:

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true. We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to build our systems, identify issues and build solutions.”

According to Facebook, the documents that were sent to The Wall Street Journal “misrepresent” what Facebook is doing. Personally, I find it hard to believe that Facebook is telling the truth.

Facebook Provided Details About the Outage

On October 4, 2021, Facebook suddenly went down. The problem extended to Instagram and WhatsApp. In my opinion, this situation might be a good example of why allowing one giant company to continually purchase its competitors is a bad idea. If those services were independent from each other – the problem that made Facebook inaccessible would not have extended to Instagram and WhatsApp.

On the day of the outage, Facebook tweeted: “We’re aware that some people are having trouble accessing our apps and products. We’re working to get things back to normal as quickly as possible, and we apologize for any inconvenience.”

There is something amusing about Facebook having to resort to Twitter in order to connect with people who could no longer access Facebook’s products.

Yesterday, the Facebook Engineering blog posted an article titled: “More details about the October 4 outage”. It was written by Santosh Janardhan. Here are a few key paragraphs from the blog post:

“…This outage was triggered by the system that manages our global backbone network capacity. The backbone is the network Facebook has built to connect all our computing facilities together, which consists of tens of thousands of miles of fiber-optic cables crossing the globe and linking all our data centers.”

“… The data traffic between all these computing facilities is managed by routers, which figure out where to send all the incoming and outgoing data. And in the extensive day-to-day work of maintaining this infrastructure, our engineers often need to take part of the backbone offline for maintenance – perhaps repairing a fiber line, adding more capacity or updating the router itself.

“This was the source of yesterday’s outage. During one of these routine maintenance jobs, a command was issued with the intention to assess the availability of global backbone capacity, which unintentionally took down all the connections in our backbone network, effectively disconnecting Facebook data centers globally. Our systems are designed to audit commands like these to prevent mistakes like this, but a bug in that audit tool prevented it from properly stopping the command…”
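Facebook’s post does not describe the actual tooling, but the failure mode it reports – an audit layer that is supposed to block dangerous commands yet lets one through because of a bug – can be sketched in a few lines of hypothetical Python. Every name here is illustrative, not part of Facebook’s real systems:

```python
# Hypothetical sketch of a command-audit layer like the one Facebook
# describes: commands are vetted before execution, and destructive
# ones should be blocked. All names are illustrative.

DESTRUCTIVE_KEYWORDS = {"disable", "shutdown", "drain"}

def audit(command: str) -> bool:
    """Return True if the command is considered safe to run.

    Bug: the check only inspects the first word, so a routine
    "assess" command that embeds a destructive sub-command slips
    through, loosely analogous to the audit bug Facebook reported.
    """
    first_word = command.split()[0].lower()
    return first_word not in DESTRUCTIVE_KEYWORDS

def run(command: str) -> str:
    """Execute the command only if the audit approves it."""
    if not audit(command):
        return "blocked"
    return "executed"

# A plainly destructive command is caught...
print(run("drain backbone-link-7"))             # blocked
# ...but an assessment that embeds one is not.
print(run("assess capacity then drain links"))  # executed
```

The point of the sketch is only that an audit tool is itself code: if its matching logic has a gap, a command with global blast radius runs exactly as if the safeguard did not exist.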

A “bug” in Facebook’s own audit tools crashed Facebook. This situation makes me think of the horror movies where someone is absolutely terrified and calls 911, only to learn the scary calls they had been receiving came from inside the house.

Facebook Denies Instagram is “Toxic for Teens”

Facebook denies claims made by The Wall Street Journal about Instagram being “toxic for teen girls”. In its Newsroom, Facebook posted the following claims:

  • Contrary to the Wall Street Journal’s characterization, Instagram’s research shows that on 11 of 12 well-being issues, teenage girls who said they struggled with those difficult issues also said that Instagram made them better rather than worse.
  • This research, like external research on these issues, found teens report having both positive and negative experiences with social media.
  • We do internal research to find out how we can best improve the experience for our teens, and our research has informed product changes as well as new resources.

CNBC reported that Facebook executive Antigone Davis, global head of safety, will testify before the Senate Commerce subcommittee on consumer protection on September 30, 2021. The hearing focuses on The Wall Street Journal’s article that shows Instagram had a negative effect on many teen girls’ mental health.

Personally, it sounds to me like Facebook got caught, and is trying to salvage its reputation before the Senate subcommittee hearing begins.

The Wall Street Journal recently published an article titled: “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”. According to The Verge, that information came from documents that had been leaked to The Wall Street Journal.

The Verge pointed out some of The Wall Street Journal’s findings:

  • A study by Facebook of teen Instagram users in the US and UK found that more than 40% of those who reported feeling “unattractive” said the feelings started when using Instagram.
  • Research reviewed by Facebook’s top executives concluded that Instagram was engineered towards greater “social comparison” than rival apps like TikTok and Snapchat. TikTok is focused on performance, and Snapchat uses jokey filters that focus on the face, while Instagram spotlights users’ bodies and lifestyles.
  • “Teens blame Instagram for increases in the rate of anxiety and depression,” said internal research by Facebook presented in 2019, and that “This reaction was unprompted and consistent across all groups”.
  • Facebook found that among the teens who said they had suicidal thoughts, 13 percent of UK users and 6 percent of US users said these impulses could be traced back to the app.