
UK CMA Raises Concerns Over Facebook’s Takeover of Giphy



The Competition and Markets Authority has provisionally found Facebook’s merger with Giphy will harm competition between social media platforms and remove a potential challenger in the display advertising market.

CMA stated: The merger brings together Facebook, the largest provider of social media sites and display advertising in the UK, with Giphy, the largest provider of GIFs. If the Competition and Markets Authority’s competition concerns are ultimately confirmed, it could require Facebook to unwind the deal and sell off Giphy in its entirety.

The CMA also stated: This is particularly concerning given Facebook’s existing market power in display advertising – as part of its assessment, the CMA found that Facebook had a share of around 50% of the £5.5 billion display advertising market in the UK.

Stuart McIntosh, chair of the independent inquiry group carrying out the phase 2 investigation, said:

“Millions of people share GIFs every day with friends, family and colleagues, and this number continues to grow. Giphy’s takeover could see Facebook withdrawing GIFs from competing platforms or requiring more user data in order to access them. It also removes a potential challenger to Facebook in the £5.5 billion display advertising market. None of this would be good news for customers.

“While our investigation has shown serious competition concerns, these are provisional. We will now consult on our findings before completing our review. Should we conclude that the merger is detrimental to the market and social media users, we will take the necessary actions to make sure people are protected.”

Variety reported that the deal between Facebook and Giphy was announced in May of 2020, and is valued at $400 million.

A Facebook spokesperson told Variety “We disagree with the CMA’s preliminary findings, which we do not believe to be supported by the evidence. As we have demonstrated, this merger is in the best interests of people and businesses in the U.K. – and around the world – who use Giphy and our services. We will continue to work with the CMA to address the misconception that the deal harms competition.”

That’s a typical response from Facebook every time it is called out on its terrible actions.

Variety reported that Giphy currently has no employees, revenue, or assets in the UK, which Facebook argues means the CMA has no jurisdiction over the deal. The merger is also being assessed by other competition authorities.


Facebook Removed Some False Information About COVID-19



Facebook said it has removed a network of accounts from Russia that the company linked to a marketing firm which aimed to enlist influencers to push anti-vaccine content about the COVID-19 vaccines, Reuters reported.

According to Reuters, Facebook said it has banned accounts connected to Fazze, a subsidiary of UK-registered marketing firm AdNow, which primarily conducted its operations from Russia, for violating its policy against foreign interference.

Facebook posted information on its Newsroom that included a Summary of July 2021 Findings.

…In July, we removed two networks from Russia and Myanmar. In this report, we’re also sharing an in-depth analysis by our threat intelligence team into one of the operations – a network from Russia linked to Fazze, a marketing firm registered in the UK – to add to the public reporting on this network’s activity across a dozen different platforms…

Facebook removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military.

Facebook also removed 65 Facebook accounts and 243 Instagram accounts from Russia that Facebook linked to Fazze, whose operations were primarily conducted from Russia. Fazze is now banned from Facebook’s platform.

The BBC reported that the accounts in the network spread memes that used images from the Planet of the Apes films to give the impression that the vaccine would turn people into monkeys.

Reuters pointed out that false claims and conspiracy theories about COVID-19 and its vaccines have proliferated on social media in recent months. Major tech firms like Facebook have been criticized by U.S. lawmakers and President Joe Biden’s administration, who say the spread of online lies about vaccines is making it harder to fight the pandemic.

Personally, I think it is good that Facebook finally got around to removing (some) misinformation about COVID-19 and vaccines. Doing so could encourage people who are vaccine-hesitant to consider protecting themselves and their loved ones by getting the vaccine. That won’t happen if all they see on Facebook is misinformation.


Facebook Disabled Accounts Tied to NYU Research Group



Facebook has disabled the personal accounts of a group of New York University researchers who were studying political ads on Facebook’s platform, Bloomberg reported. Facebook has claimed that the researchers were scraping data in violation of Facebook’s terms of service.

The company also cut off the researchers’ access to Facebook’s APIs, technology that is used to share data from Facebook to other apps or services, and disabled other apps and Pages associated with the research project, according to Mike Clark, a director of product management on Facebook’s privacy team.

The project is called NYU Ad Observatory. It appears to be connected to NYU Cybersecurity for Democracy and the NYU Tandon School of Engineering.

According to Bloomberg, Facebook sent the NYU Ad Observatory researchers a cease-and-desist letter last October, demanding that they stop collecting data about Facebook political ads and threatening “additional enforcement action.”

Facebook posted on its Newsroom an article titled: “Research Cannot Be the Justification for Compromising People’s Privacy”. In it, Facebook claims it disabled the accounts, apps, Pages and platform access associated with NYU’s Ad Observatory Project and its operators “after repeated attempts to bring their research into compliance with our Terms”.

Facebook claims that the researchers gathered data by creating a browser extension that was programmed “to evade our detection systems and scrape data such as usernames, ads, links to user’s profiles and ‘Why am I seeing this ad?’ information, some of which is not publicly-viewable on Facebook”.

Mozilla posted an article titled “Why Facebook’s claims about the Ad Observer are wrong”. From the article:

“We decided to recommend Ad Observer because our reviews assured us that it respects user privacy and supports transparency. It collects ads, targeting parameters, and metadata associated with the ads. It does not collect personal posts or information about your friends. And it does not compile a user profile on its servers….”

What is Facebook trying to hide?


Facebook Tests Telling Users if Their Post was Removed by Automation



Facebook posted its First Quarterly Update on the Oversight Board. Part of the update involves sharing their progress on the board’s non-binding recommendations.

…The board’s recommendations touch on how we enforce our policies, how we inform users of actions we’ve taken and what they can do about it, and additional transparency reporting…

Facebook provided some examples of things it has done that the board recommended:

They launched and continue to test new user experiences that are more specific about how and why they remove content. I think this is a good idea, because there will always be someone new to Facebook who hasn’t been on the platform long enough to learn what is, and is not, allowed.

They made progress on the specificity of their hate speech notifications by using an additional classifier that is able to predict what kind of hate speech is in the content: violence, dehumanization, mocking hate crimes, visual comparison, inferiority, contempt, cursing, exclusion, and/or slurs. Facebook says that people using Facebook in English will now receive more specific messaging when they violate the hate speech policy. The company will roll out more specific notifications for hate speech violations in other languages in the future.
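To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of how predicted subcategory labels might be turned into more specific notifications. The subcategory names are taken from Facebook’s list above; the messages, function, and mapping are my own illustration, not Facebook’s actual system.

```python
# Hypothetical sketch (not Facebook's code): map a classifier's predicted
# hate-speech subcategories to a more specific removal notification.

SUBCATEGORY_MESSAGES = {
    "violence": "Your post was removed because it appears to call for violence.",
    "dehumanization": "Your post was removed because it compares a group of people to animals or objects.",
    "mocking_hate_crimes": "Your post was removed because it mocks victims of hate crimes.",
    "visual_comparison": "Your post was removed because it contains a demeaning visual comparison.",
    "inferiority": "Your post was removed because it claims a group of people is inferior.",
    "contempt": "Your post was removed because it expresses contempt toward a protected group.",
    "cursing": "Your post was removed because it directs cursing at a protected group.",
    "exclusion": "Your post was removed because it calls for excluding a protected group.",
    "slurs": "Your post was removed because it contains a slur.",
}

GENERIC_MESSAGE = "Your post was removed because it violates our hate speech policy."


def build_notification(predicted_labels):
    """Return the most specific message available, falling back to the generic notice."""
    for label in predicted_labels:
        if label in SUBCATEGORY_MESSAGES:
            return SUBCATEGORY_MESSAGES[label]
    return GENERIC_MESSAGE


print(build_notification(["dehumanization"]))  # specific notice
print(build_notification([]))                  # generic fallback
```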

I’m not sure that more specific notifications will influence people to stop posting hate speech. A user who is angry about having a post removed might double-down and post something even worse. It is unclear to me if Facebook is providing any penalty for posting hate speech (other than having the post removed).

Facebook is running tests to assess the impact of telling people whether automation was involved in the enforcement. This likely means that if a user’s post is removed because it broke the rules, and the decision was made by automation – the user will be informed of that.

Personally, I think that last recommendation could be controversial. An individual person might get really angry after learning that their post was removed by automation instead of a human. This might lead to the user trying to convince Facebook to have a human check over that post (in the hopes of getting a more favorable result). If that happens a lot, I suspect that political leaders might add to the conversation – with their own recommendations.


Facebook Oversight Board Considers Posting of Private Residential Addresses



Facebook announced that its Oversight Board has accepted its first policy advisory opinion referral from Facebook. The Board is asked to consider if posting private residential addresses on Facebook is acceptable.

It is important to know that Facebook itself states that the decision made by the Oversight Board is not binding. To me, that sounds like Facebook is giving itself an opportunity to either go along with – or completely ignore – what the Oversight Board determines.

…Access to residential addresses can be an important tool for journalism, civic activism, and other public discourse. However, exposing this information without consent can also create a risk to an individual’s safety and infringe on privacy…

Facebook is asking the Oversight Board for guidance on the following questions:

  • What information sources should render private information “publicly available?” (For instance, should we factor into our decision whether an image of a residence was already published by another publication?)
  • Should sources be excluded when they are not easily accessible or trustworthy (such as data aggregator websites, the dark web, or public records that cannot be digitally accessed from a remote location)?
  • If some sources should be excluded, how should Facebook determine the type of sources that won’t be considered in making private information “publicly available?”
  • If an individual’s private information is simultaneously posted to multiple places, including Facebook, should Facebook continue to treat it as private information or treat it as publicly available?
  • Should Facebook remove personal information despite its public availability elsewhere, for example, in news media, government records, or the dark web? That is, should Facebook restrict the availability on its platform of publicly available but personal information, which may include removing news articles that publish such information or individual posts of publicly available government records?

I think we all know that posting someone else’s personal residential information on the internet, without the person’s permission, is wrong. Facebook shouldn’t need to consult an Oversight Board to understand that.


Facebook Tests a Prompt that Twitter Released Last Year



Today, Facebook tweeted about something that could be a new feature on the platform. Facebook will be testing a prompt that was originally released by Twitter in September of 2020. The prompt will encourage Facebook users to actually read an article before sharing it with others.

Starting today, we’re testing a way to promote more informed sharing of news articles. If you go to share a news article link you haven’t opened, we’ll show a prompt encouraging you to open it and read it before sharing it with others.

The Verge reported that those who are shown this pop-up can still choose to share the article without opening it. To me, it sounds like giving users the option of sharing an unread article anyway defeats the entire purpose of the prompt.
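As a rough illustration of why the prompt is so easy to bypass, here is a small hypothetical sketch (in Python) of the flow The Verge describes. Nothing here is Facebook’s actual code; the function, options, and URL are made up for illustration.

```python
# Hypothetical sketch of the unread-article prompt: if a user shares a news
# link they haven't opened, show a reminder, but let them share anyway.

def share_news_link(url, opened_urls, ask_user):
    """Return True if the link ends up being shared."""
    if url not in opened_urls:
        # The user hasn't opened this article yet; nudge them to read it first.
        choice = ask_user(
            "You're about to share this article without opening it. "
            "Open it first, or share anyway?",
            options=("open", "share_anyway", "cancel"),
        )
        if choice == "cancel":
            return False
        if choice == "open":
            opened_urls.add(url)
    # The prompt never blocks the share: "share_anyway" goes straight through.
    return True


# Example: a user who always taps "share anyway" still shares the unread article.
shared = share_news_link(
    "https://example.com/article",
    opened_urls=set(),
    ask_user=lambda message, options: "share_anyway",
)
print(shared)  # True
```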

A Facebook spokesperson told The Verge that the test of this prompt would be rolled out to 6 percent of Android users worldwide. I suppose that the way the prompt is used – or ignored – might determine whether or not Facebook eventually rolls it out to everyone.

There are some flaws with this type of prompt. It appears that the prompt only checks whether a person has opened the article through Facebook before sharing it. Opening the article doesn’t necessarily mean the person is going to read it. And a person who wants to share an article from a less-than-credible website will still be permitted to do so.

Based on the article from The Verge, it appears that the primary reason Facebook is testing this prompt is to combat the spread of misinformation. That is a noble goal – but it won’t work if people choose to share articles from tabloids and other questionable sources.


Facebook Oversight Board Upholds Trump’s Suspension



Facebook created an Oversight Board to make the decision on whether or not to overturn Trump’s indefinite suspension. Later on, the Oversight Board chose to delay its decision and extended the public comment deadline for this case.

Today, Facebook posted information in their Newsroom titled: “Oversight Board Upholds Facebook’s Decision to Suspend Donald Trump’s Accounts”. It was written by Nick Clegg, VP of Global Affairs and Communications.

Today, the Oversight Board upheld Facebook’s suspension of former US President Donald Trump’s Facebook and Instagram accounts. As we stated in January, we believe our decision was necessary and right, and we’re pleased the board has recognized that the unprecedented circumstances justified the exceptional measure we took.

In the post, Nick Clegg stated that the Oversight Board has not required Facebook to immediately restore Mr. Trump’s accounts. It also has not specified the appropriate duration of the penalty. The Oversight Board called the open-ended nature of the suspension an “indeterminate and standardless penalty”. As a result, Facebook will now consider the Oversight Board’s decision and determine an action that is clear and proportionate.

In the meantime, Mr. Trump’s accounts remain suspended.

It is worth pointing out that while the Oversight Board’s ruling on the case itself is binding, the list of recommendations it included for Facebook to consider is not; the company doesn’t have to adhere to any of them.

One of the recommendations stated that Facebook should “provide users with accessible information on how many violations, strikes, and penalties have been assessed against them, and the consequences that will follow future violations.” Personally, I think that would be a good thing for Facebook to implement. It could make people think twice before posting something that breaks Facebook’s terms of service.