Category Archives: Facebook

Facebook Messenger Updated End-to-End Encrypted Chats



Facebook Messenger announced that they are rolling out the option to make voice and video calls end-to-end encrypted on Messenger, along with updated controls for disappearing messages.

According to the announcement, "People expect their messaging apps to be secure and private, and with these new features, we're giving them more control over how private they want their calls and chats to be."

Here is a quick look at what’s new:

Option for end-to-end encrypted voice and video calls: Messenger says it has offered the option to secure your one-on-one text chats with end-to-end encryption since 2016. Now, they are introducing calling to this chat mode so you can secure your audio and video calls with the same technology, if you choose.

Updated controls over Disappearing Messages: Messenger also updated the expiring message feature within their end-to-end encrypted chats. This mode now provides more options for people in the chat to choose the amount of time before all new messages disappear, from as few as 5 seconds to as long as 24 hours.

Here are some things Messenger says are coming soon:

End-to-end encrypted group chats and calls in Messenger: They will begin testing end-to-end encryption for group chats, including voice and video calls, for friends and family that already have an existing chat thread or are already connected. They will also begin a test for your delivery controls to work with your end-to-end encrypted chats. That way, you can prevent unwanted interactions by deciding who can reach your chat lists, who goes to your requests folder, and who can’t message you at all.

Opt-in end-to-end encryption for Instagram DMs: Messenger will also do a limited test with adults in certain countries that lets them opt-in to end-to-end encrypted messages and calls for one-on-one conversations on Instagram. Similar to how Messenger works today, you need to have an existing chat or be following each other to start an end-to-end encrypted DM. As always, you can block someone you don’t want to talk to or report something to Messenger if it doesn’t seem right.

I find it interesting that this Messenger post appeared shortly after Apple’s controversial decision to scan the iCloud photos of some users hit the news. Messenger, which is part of Facebook, appears to be trying to look like the “good guys” in this situation.

It is important to keep in mind that Facebook (and its extensions) will continue to gather your data and track you. They are giving you the option to use end-to-end encryption on calls and videos, but probably hope you won’t actually opt-in.


UK CMA Raises Concerns Over Facebook’s Takeover of Giphy



The Competition and Markets Authority (CMA) has provisionally found that Facebook's merger with Giphy will harm competition between social media platforms and remove a potential challenger in the display advertising market.

CMA stated: The merger brings together Facebook, the largest provider of social media sites and display advertising in the UK, with Giphy, the largest provider of GIFs. If the Competition and Markets Authority's competition concerns are ultimately confirmed, it could require Facebook to unwind the deal and sell off Giphy in its entirety.

The CMA also stated: This is particularly concerning given Facebook’s existing market power in display advertising – as part of its assessment, the CMA found that Facebook had a share of around 50% of the £5.5 billion display advertising market in the UK.

Stuart McIntosh, chair of the independent inquiry group carrying out the phase 2 investigation, said:

“Millions of people share GIFs every day with friends, family and colleagues, and this number continues to grow. Giphy’s takeover could see Facebook withdrawing GIFs from competing platforms or requiring more user data in order to access them. It also removes a potential challenger to Facebook in the £5.5 billion display advertising market. None of this would be good news for customers.

“While our investigation has shown serious competition concerns, these are provisional. We will now consult on our findings before completing our review. Should we conclude that the merger is detrimental to the market and social media users, we will take the necessary actions to make sure people are protected.”

Variety reported that the deal between Facebook and Giphy was announced in May of 2020, and is valued at $400 million.

A Facebook spokesperson told Variety “We disagree with the CMA’s preliminary findings, which we do not believe to be supported by the evidence. As we have demonstrated, this merger is in the best interests of people and businesses in the U.K. – and around the world – who use Giphy and our services. We will continue to work with the CMA to address the misconception that the deal harms competition.”

That’s a typical response from Facebook every time it is called out on its terrible actions.

Variety reported that Giphy currently has no employees, revenue, or assets in the UK, which Facebook has argued means the CMA has no jurisdiction over the deal. The merger is also being assessed by competition authorities in other countries.


Facebook Removed Some False Information About COVID-19



Facebook said it has removed a network of accounts from Russia that the company linked to a marketing firm which aimed to enlist influencers to push anti-vaccine content about the COVID-19 vaccines, Reuters reported.

According to Reuters, Facebook said it has banned accounts connected to Fazze, a subsidiary of UK-registered marketing firm AdNow, which primarily conducted its operations from Russia, for violating its policy against foreign interference.

Facebook posted information on its Newsroom that included a Summary of July 2021 Findings.

…In July, we removed two networks from Russia and Myanmar. In this report, we’re also sharing an in-depth analysis by our threat intelligence team into one of the operations – a network from Russia linked to Fazze, a marketing firm registered in the UK – to add to the public reporting on this network’s activity across a dozen different platforms…

Facebook removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military.

Facebook also removed 65 Facebook accounts and 243 Instagram accounts from Russia that Facebook linked to Fazze, whose operations were primarily conducted from Russia. Fazze is now banned from Facebook’s platform.

The BBC reported that the accounts in the network spread memes that used images from the Planet of the Apes films to give the impression that the vaccine would turn people into monkeys.

Reuters pointed out that false claims and conspiracy theories about COVID-19 and its vaccines have proliferated on social media in recent months. Major tech firms like Facebook have been criticized by U.S. lawmakers and President Joe Biden’s administration, who say the spread of online lies about vaccines is making it harder to fight the pandemic.

Personally, I think it is good that Facebook finally got around to removing (some) misinformation about COVID-19 and vaccines. Doing so could encourage people who are vaccine-hesitant to consider protecting themselves and their loved ones by getting the vaccine. That won’t happen if all they see on Facebook is misinformation.


Facebook Disabled Accounts Tied to NYU Research Group



Facebook has disabled the personal accounts of a group of New York University researchers who were studying political ads on Facebook’s platform, Bloomberg reported. Facebook has claimed that the researchers were scraping data in violation of Facebook’s terms of service.

The company also cut off the researchers’ access to Facebook’s APIs, technology that is used to share data from Facebook to other apps or services, and disabled other apps and Pages associated with the research project, according to Mike Clark, a director of product management on Facebook’s privacy team.

The project is called NYU Ad Observatory. It appears to be connected to NYU Cybersecurity for Democracy and the NYU Tandon School of Engineering.

According to Bloomberg, Facebook sent the NYU Ad Observatory researchers a cease-and-desist letter last October, demanding that they stop collecting data about Facebook political ads and threatening “additional enforcement action.”

Facebook posted on its Newsroom an article titled: “Research Cannot Be the Justification for Compromising People’s Privacy”. In it, Facebook claims it disabled the accounts, apps, Pages and platform access associated with NYU’s Ad Observatory Project and its operators “after repeated attempts to bring their research into compliance with our Terms”.

Facebook claims that the researchers gathered data by creating a browser extension that was programmed "to evade our detection systems and scrape data such as usernames, ads, links to users' profiles and 'Why am I seeing this ad?' information, some of which is not publicly-viewable on Facebook".

Mozilla posted an article titled “Why Facebook’s claims about the Ad Observer are wrong”. From the article:

“We decided to recommend Ad Observer because our reviews assured us that it respects user privacy and supports transparency. It collects ads, targeting parameters, and metadata associated with the ads. It does not collect personal posts or information about your friends. And it does not compile a user profile on its servers….”

What is Facebook trying to hide?


Facebook Tests Telling Users if Their Post was Removed by Automation



Facebook posted its First Quarterly Update on the Oversight Board. Part of the update involves sharing their progress on the board’s non-binding recommendations.

…The board’s recommendations touch on how we enforce our policies, how we inform users of actions we’ve taken and what they can do about it, and additional transparency reporting…

Facebook provided some examples of things it has done that the board recommended:

They launched and continue to test new user experiences that are more specific about how and why they remove content. I think this is a good idea, because there will always be someone new to Facebook who hasn't been there long enough to learn what is, and is not, allowed.

They made progress on the specificity of their hate speech notifications by using an additional classifier that is able to predict what kind of hate speech is in the content: violence, dehumanization, mocking hate crimes, visual comparison, inferiority, contempt, cursing, exclusion, and/or slurs. Facebook says that people using Facebook in English will now receive more specific messaging when they violate the hate speech policy. The company will roll out more specific notifications for hate speech violations in other languages in the future.

I’m not sure that more specific notifications will influence people to stop posting hate speech. A user who is angry about having a post removed might double-down and post something even worse. It is unclear to me if Facebook is providing any penalty for posting hate speech (other than having the post removed).

Facebook is running tests to assess the impact of telling people whether automation was involved in the enforcement. This likely means that if a user’s post is removed because it broke the rules, and the decision was made by automation – the user will be informed of that.

Personally, I think that last recommendation could be controversial. A person might get really angry after learning that their post was removed by automation instead of a human. This might lead to the user trying to convince Facebook to have a human check over that post (in the hopes of getting a more favorable result). If that happens a lot, I suspect that political leaders might add to the conversation – with their own recommendations.


Facebook Oversight Board Considers Posting of Private Residential Addresses



Facebook announced that its Oversight Board has accepted the company's first policy advisory opinion referral. The Board is being asked to consider whether posting private residential addresses on Facebook is acceptable.

It is important to know that Facebook itself states that the decision made by the Oversight Board is not binding. To me, that sounds like Facebook is giving itself an opportunity to either go along with – or completely ignore – what the Oversight Board determines.

…Access to residential addresses can be an important tool for journalism, civic activism, and other public discourse. However, exposing this information without consent can also create a risk to an individual’s safety and infringe on privacy…

Facebook is asking the Oversight Board for guidance on the following questions:

  • What information sources should render private information "publicly available?" (For instance, should we factor into our decision whether an image of a residence was already published by another publication?)
  • Should sources be excluded when they are not easily accessible or trustworthy (such as data aggregator websites, the dark web, or public records that cannot be digitally accessed from a remote location?)
  • If some sources should be excluded, how should Facebook determine the type of sources that won’t be considered in making private information “publicly available?”
  • If an individual’s private information is simultaneously posted to multiple places, including Facebook, should Facebook continue to treat it as private information or treat it as publicly available?
  • Should Facebook remove personal information despite its public availability, for example, in news media, government records, or the dark web? That is, should Facebook limit the availability on its platform of publicly available but personal information, which may include removing news articles that publish such information or individual posts of publicly available government records?

I think we all know that posting someone else’s personal residential information on the internet, without the person’s permission, is wrong. Facebook shouldn’t need to consult an Oversight Board to understand that.


Facebook Tests a Prompt that Twitter Released Last Year



Today, Facebook tweeted about something that could be a new feature on the platform. Facebook will be testing a prompt that was originally released by Twitter in September of 2020. The prompt will encourage Facebook users to actually read an article before sharing it.

Starting today, we’re testing a way to promote more informed sharing of news articles. If you go to share a news article link you haven’t opened, we’ll show a prompt encouraging you to open it and read it before sharing it with others.

The Verge reported that those who are shown this pop-up can still choose to share the article without having opened it. To me, giving users the option of sharing an unread article anyway defeats the entire purpose of the prompt.

A Facebook spokesperson told The Verge that the test of this prompt would be rolled out to 6 percent of Android users worldwide. I suppose that the way the prompt is used – or ignored – might determine whether or not Facebook eventually rolls it out to everyone.

There are some flaws with this type of prompt. It appears that the prompt will only require a person to open the article through Facebook before they can share it. That doesn't necessarily mean the person is going to read the article. A person who wants to share an article from a less-than-credible website will still be permitted to do so.

Based on the article from The Verge, it appears that the primary reason Facebook is testing this prompt is to combat the spread of misinformation. That is a noble goal – but it won’t work if people choose to share articles from tabloids and other questionable sources.