Tag Archives: Facebook

Facebook Denies Instagram is “Toxic for Teens”



Facebook denies claims made by The Wall Street Journal about Instagram being “toxic for teen girls”. In its Newsroom, Facebook posted the following claims:

  •  Contrary to the Wall Street Journal’s characterization, Instagram’s research shows that on 11 of 12 well-being issues, teenage girls who said they struggled with those difficult issues also said that Instagram made them better rather than worse.
  •  This research, like external research on these issues, found teens report having both positive and negative experiences with social media.
  • We do internal research to find out how we can best improve the experience for our teens, and our research has informed product changes as well as new resources.

CNBC reported that Facebook executive Antigone Davis, global head of safety, will testify before the Senate Commerce subcommittee on consumer protection on September 30, 2021. The hearing will focus on The Wall Street Journal’s article reporting that Instagram had a negative effect on many teen girls’ mental health.

Personally, it sounds to me like Facebook got caught, and is trying to salvage its reputation before the Senate subcommittee hearing begins.

The Wall Street Journal recently published an article titled: “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”. According to The Verge, that information came from internal documents that were leaked to The Wall Street Journal.

The Verge pointed out some of The Wall Street Journal’s findings:

  • A study by Facebook of teen Instagram users in the US and UK found that more than 40% of those who reported feeling “unattractive” said the feelings started when using Instagram.
  • Research reviewed by Facebook’s top executives concluded that Instagram was engineered towards greater “social comparison” than rival apps like TikTok and Snapchat. TikTok is focused on performance, and Snapchat uses jokey filters that focus on the face, while Instagram spotlights users’ bodies and lifestyles.
  •  “Teens blame Instagram for increases in the rate of anxiety and depression,” said internal research by Facebook presented in 2019, and that “This reaction was unprompted and consistent across all groups”.
  • Facebook found that among the teens who said they had suicidal thoughts, 13 percent of UK users and 6 percent of US users said these impulses could be traced back to the app.

UK CMA Raises Concerns Over Facebook’s Takeover of Giphy



The Competition and Markets Authority (CMA) has provisionally found that Facebook’s merger with Giphy will harm competition between social media platforms and remove a potential challenger in the display advertising market.

The CMA stated: The merger brings together Facebook, the largest provider of social media sites and display advertising in the UK, with Giphy, the largest provider of GIFs. If the CMA’s competition concerns are ultimately confirmed, it could require Facebook to unwind the deal and sell off Giphy in its entirety.

The CMA also stated: This is particularly concerning given Facebook’s existing market power in display advertising – as part of its assessment, the CMA found that Facebook had a share of around 50% of the £5.5 billion display advertising market in the UK.

Stuart McIntosh, chair of the independent inquiry group carrying out the phase 2 investigation, said:

“Millions of people share GIFs every day with friends, family and colleagues, and this number continues to grow. Giphy’s takeover could see Facebook withdrawing GIFs from competing platforms or requiring more user data in order to access them. It also removes a potential challenger to Facebook in the £5.5 billion display advertising market. None of this would be good news for customers.

“While our investigation has shown serious competition concerns, these are provisional. We will now consult on our findings before completing our review. Should we conclude that the merger is detrimental to the market and social media users, we will take the necessary actions to make sure people are protected.”

Variety reported that the deal between Facebook and Giphy was announced in May of 2020, and is valued at $400 million.

A Facebook spokesperson told Variety “We disagree with the CMA’s preliminary findings, which we do not believe to be supported by the evidence. As we have demonstrated, this merger is in the best interests of people and businesses in the U.K. – and around the world – who use Giphy and our services. We will continue to work with the CMA to address the misconception that the deal harms competition.”

That’s a typical response from Facebook every time it is called out on its terrible actions.

Variety also reported that Facebook has argued that Giphy currently has no employees, revenue, or assets in the UK, and that the CMA therefore has no jurisdiction over the deal. The merger is also being assessed by other competition authorities.


Facebook Removed Some False Information About COVID-19



Facebook said it has removed a network of accounts from Russia that the company linked to a marketing firm which aimed to enlist influencers to push anti-vaccine content about the COVID-19 vaccines, Reuters reported.

According to Reuters, Facebook said it has banned accounts connected to Fazze, a subsidiary of UK-registered marketing firm AdNow, which primarily conducted its operations from Russia, for violating its policy against foreign interference.

Facebook posted information on its Newsroom that included a Summary of July 2021 Findings.

…In July, we removed two networks from Russia and Myanmar. In this report, we’re also sharing an in-depth analysis by our threat intelligence team into one of the operations – a network from Russia linked to Fazze, a marketing firm registered in the UK – to add to the public reporting on this network’s activity across a dozen different platforms…

Facebook removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military.

Facebook also removed 65 Facebook accounts and 243 Instagram accounts from Russia that Facebook linked to Fazze, whose operations were primarily conducted from Russia. Fazze is now banned from Facebook’s platform.

The BBC reported that the accounts in the network spread memes that used images from the Planet of the Apes films to give the impression that the vaccine would turn people into monkeys.

Reuters pointed out that false claims and conspiracy theories about COVID-19 and its vaccines have proliferated on social media in recent months. Major tech firms like Facebook have been criticized by U.S. lawmakers and President Joe Biden’s administration, who say the spread of online lies about vaccines is making it harder to fight the pandemic.

Personally, I think it is good that Facebook finally got around to removing (some) misinformation about COVID-19 and vaccines. Doing so could encourage people who are vaccine-hesitant to consider protecting themselves and their loved ones by getting the vaccine. That won’t happen if all they see on Facebook is misinformation.


FTC Pushes Back Against Facebook’s Removal of NYU Ad Observatory



The Federal Trade Commission (FTC) sent a letter to Mark Zuckerberg regarding the company’s removal of NYU’s Ad Observatory from the platform. The letter was written by Acting Director of the Bureau of Consumer Protection, Samuel Levine.

Facebook posted on its Newsroom that it had disabled the accounts, apps, Pages, and platform access associated with NYU’s Ad Observatory Project and its operators.

Facebook claimed that the researchers were gathering data by creating a browser extension that was programmed to evade Facebook’s detection systems and scrape data such as usernames, ads, links to users’ profiles, and ‘Why am I seeing this ad?’ information, some of which is not publicly viewable on Facebook.

Mozilla debunked Facebook’s claim, pointing out that it had decided to recommend Ad Observer because its review of the extension assured it that the tool respected user privacy. According to Mozilla, the extension does not collect personal posts or information about friends, and it does not compile a user profile on its server.

Here are some parts of the letter from the FTC to Mark Zuckerberg:

“I write concerning Facebook’s recent insinuation that its actions against an academic research project conducted by NYU’s Ad Observatory were required by the company’s consent decree with the Federal Trade Commission. As the company has since acknowledged, this is inaccurate. The FTC is committed to protecting the privacy of people, and efforts to shield targeted advertising practices from scrutiny run counter to that mission…

“…Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising…”

Clearly, the FTC does not agree with Facebook’s decision to remove NYU’s Ad Observatory project from its platform. I wonder what Facebook is trying to hide. It must have something to do with the political ads the Ad Observatory was studying.


Facebook Disabled Accounts Tied to NYU Research Group



Facebook has disabled the personal accounts of a group of New York University researchers who were studying political ads on Facebook’s platform, Bloomberg reported. Facebook has claimed that the researchers were scraping data in violation of Facebook’s terms of service.

The company also cut off the researchers’ access to Facebook’s APIs, technology that is used to share data from Facebook to other apps or services, and disabled other apps and Pages associated with the research project, according to Mike Clark, a director of product management on Facebook’s privacy team.

The project is called NYU Ad Observatory. It appears to be connected to NYU Cybersecurity for Democracy and the NYU Tandon School of Engineering.

According to Bloomberg, Facebook sent the NYU Ad Observatory researchers a cease-and-desist letter last October, demanding that they stop collecting data about Facebook political ads and threatening “additional enforcement action.”

Facebook posted on its Newsroom an article titled: “Research Cannot Be the Justification for Compromising People’s Privacy”. In it, Facebook claims it disabled the accounts, apps, Pages and platform access associated with NYU’s Ad Observatory Project and its operators “after repeated attempts to bring their research into compliance with our Terms”.

Facebook claims that the researchers gathered data by creating a browser extension that was programmed “to evade our detection systems and scrape data such as usernames, ads, links to users’ profiles and ‘Why am I seeing this ad?’ information, some of which is not publicly-viewable on Facebook”.

Mozilla posted an article titled “Why Facebook’s claims about the Ad Observer are wrong”. From the article:

“We decided to recommend Ad Observer because our reviews assured us that it respects user privacy and supports transparency. It collects ads, targeting parameters, and metadata associated with the ads. It does not collect personal posts or information about your friends. And it does not compile a user profile on its servers….”

What is Facebook trying to hide?


Facebook Tests Telling Users if Their Post was Removed by Automation



Facebook posted its First Quarterly Update on the Oversight Board. Part of the update involves sharing their progress on the board’s non-binding recommendations.

…The board’s recommendations touch on how we enforce our policies, how we inform users of actions we’ve taken and what they can do about it, and additional transparency reporting…

Facebook provided some examples of things it has done that the board recommended:

They launched and continue to test new user experiences that are more specific about how and why they remove content. I think this is a good idea, because there will always be someone new to Facebook who hasn’t been there long enough to learn what is, and is not, allowed.

They made progress on the specificity of their hate speech notifications by using an additional classifier that is able to predict what kind of hate speech is in the content: violence, dehumanization, mocking hate crimes, visual comparison, inferiority, contempt, cursing, exclusion, and/or slurs. Facebook says that people using Facebook in English will now receive more specific messaging when they violate the hate speech policy. The company will roll out more specific notifications for hate speech violations in other languages in the future.

I’m not sure that more specific notifications will influence people to stop posting hate speech. A user who is angry about having a post removed might double-down and post something even worse. It is unclear to me if Facebook is providing any penalty for posting hate speech (other than having the post removed).

Facebook is running tests to assess the impact of telling people whether automation was involved in the enforcement. This likely means that if a user’s post is removed because it broke the rules and the decision was made by automation, the user will be informed of that.

Personally, I think that last recommendation could be controversial. A person might get really angry after learning that their post was removed by automation instead of by a human. This might lead the user to try to convince Facebook to have a human check over that post (in the hopes of getting a more favorable result). If that happens a lot, I suspect that political leaders might add to the conversation with their own recommendations.


Facebook Oversight Board Considers Posting of Private Residential Addresses



Facebook announced that the Oversight Board has accepted its first policy advisory opinion referral from the company. The Board has been asked to consider whether posting private residential addresses on Facebook is acceptable.

It is important to know that Facebook itself states that the decision made by the Oversight Board is not binding. To me, that sounds like Facebook is giving itself an opportunity to either go along with – or completely ignore – what the Oversight Board determines.

…Access to residential addresses can be an important tool for journalism, civic activism, and other public discourse. However, exposing this information without consent can also create a risk to an individual’s safety and infringe on privacy…

Facebook is asking the Oversight Board for guidance on the following questions:

  • What information sources should render private information “publicly available?” (For instance, should we factor into our decision whether an image of a residence was already published by another publication?)
  • Should sources be excluded when they are not easily accessible or trustworthy (such as data aggregator websites, the dark web, or public records that cannot be digitally accessed from a remote location?)
  • If some sources should be excluded, how should Facebook determine the type of sources that won’t be considered in making private information “publicly available?”
  • If an individual’s private information is simultaneously posted to multiple places, including Facebook, should Facebook continue to treat it as private information or treat it as publicly available?
  • Should Facebook remove personal information despite its public availability, for example, in news media, government records, or the dark web? That is, should Facebook restrict the availability on Facebook of such publicly available but personal information, which may include removing news articles that publish such information or individual posts of publicly available government records?

I think we all know that posting someone else’s personal residential information on the internet, without the person’s permission, is wrong. Facebook shouldn’t need to consult an Oversight Board to understand that.