Category Archives: Facebook

Facebook is Removing Holocaust Denial Content

Facebook announced that it has updated its hate speech policy to prohibit any content that denies or distorts the Holocaust. This decision is part of Facebook’s ongoing effort to remove hate speech from its platform.

Today’s announcement marks another step in our efforts to fight hate on our services. Our decision is supported by the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust, especially among young people. According to a recent survey of adults in the US aged 18-39, almost a quarter said they believed the Holocaust was a myth, that it had been exaggerated, or that they weren’t sure.

Beginning later this year, Facebook will direct anyone who searches its platform for terms associated with the Holocaust or its denial to credible information off Facebook.

Facebook states that enforcement of these policies cannot happen overnight. They need time to train their reviewers and systems on enforcement of the new policies. To me, it sounds like reporting content that violates this new policy would be welcomed by Facebook. What better way to train reviewers and systems on enforcement than by giving them plenty of examples that (more than likely) are in violation of this new policy?

As a former teacher, I am absolutely astounded that so many people are ignorant about the Holocaust. My assumption was that this historical topic was still being taught to students. As such, it is good that Facebook will direct people who are ignorant about the Holocaust to credible resources where they can learn about it.

Facebook Won’t Accept Political Ads in Week Before the Election

Facebook announced some steps it is taking to help secure the integrity of the US elections. According to Facebook, these steps are to encourage voting, connect people to authoritative information, and reduce the risk of post-election confusion.

Mark Zuckerberg made a lengthy post on Facebook about this. Here is a small portion of it:

The US elections are just two months away, and with Covid-19 affecting communities across the country, I’m concerned about the challenges people could face when voting. I’m also worried that with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country…

Here’s what Facebook plans to do:

  • We won’t accept new political ads in the week before the election.
  • We’ll remove posts that claim that people will get COVID-19 if they take part in voting, and we’ll attach a link to authoritative information about the coronavirus to posts that might use COVID-19 to discourage voting.
  • We will attach an informational label to content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud.
  • If any candidate or campaign tries to declare victory before the final results are in, we’ll add a label to their posts directing people to official results from Reuters and the National Election Pool.

Personally, I think Facebook should have started working on this much earlier in the year, before the first caucuses were held. Imagine how much misinformation could have been removed – or at least labeled as such – if Facebook had taken this kind of action right from the start.

CNBC reported that Facebook users will still see political ads during the week of the election. The ban only affects political ads that were submitted after October 27, 2020. Older political ads won’t be removed.

CNBC also points out that the changes will go into effect after millions have already voted. In states that allow mail-in and absentee voting, people are expected to cast their ballots before election day. The damage from false information on Facebook will have already swayed users’ views.

Another problem is that Facebook users, including political candidates, will still be able to spread false information right up through election day. CNBC says the only posts specifically banned are ones saying that people will catch COVID-19 if they vote in person.

Facebook Introduces a Forwarding Limit on Messenger

Facebook announced that they are introducing a forwarding limit on Facebook Messenger. From now on, messages can only be forwarded to five people or groups at a time. The purpose of this limitation, according to Facebook, is to slow the spread of viral misinformation and harmful content that has the potential to cause real-world harm.

We want Messenger to be a safe and trustworthy platform to connect with friends and family. Earlier this year, we introduced features like safety notifications, two-factor authentication, and easier ways to block and report unwanted messages. This new feature provides yet another layer of protection by limiting the spread of viral misinformation or harmful content, and we believe it will help keep people safer online.

It is pretty obvious that viral misinformation is easily spread on social media. Topics like politics, elections, voting information, and COVID-19 tend to be cluttered with misinformation from those who want to trick people into believing something that simply isn’t true. Unfortunately, what happens on social media doesn’t always stay on social media. Those who are fooled into believing misinformation might end up harming themselves or others.

Personally, I think it is smart for Facebook to limit the reach of misinformation on Messenger with a forwarding limit of five people or groups at a time. Nobody wants to get questionable messages from strangers who clearly have an agenda they want to push. The forwarding limit should slow down those who want to spend their free time spreading misinformation. Perhaps they will give up.
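The mechanic Facebook describes is essentially a cap on fan-out. A minimal sketch of how such a cap might behave (the function and names here are hypothetical illustrations, not Messenger's actual implementation):

```python
# Cap described in Facebook's announcement; everything else here is
# a made-up illustration, not Messenger's real code.
MAX_FORWARD_RECIPIENTS = 5

def forward_message(message: str, recipients: list[str]) -> list[str]:
    """Forward a message to at most MAX_FORWARD_RECIPIENTS people or groups."""
    if len(recipients) > MAX_FORWARD_RECIPIENTS:
        # Refuse the action outright rather than silently truncating the list.
        raise ValueError(
            f"can only forward to {MAX_FORWARD_RECIPIENTS} recipients at a time"
        )
    return [f"forwarded '{message}' to {r}" for r in recipients]
```

Someone determined to spread a message widely could still forward it repeatedly in batches of five, but each batch takes another deliberate action, which is what slows viral spread down.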

That said, it would have been smarter for Facebook to crack down on the spread of misinformation much sooner. It is unfortunate that Facebook (and other social media sites) allowed misinformation on important topics to spread across their platforms for so long.

Facebook Removed 790 QAnon Groups

Facebook posted an update to how the company addresses movements and organizations tied to violence. They have taken action against Facebook Pages, Groups, and Instagram accounts tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations, and QAnon. This comes after Twitter started taking action against QAnon accounts last month.

Facebook states that they already remove content calling for or advocating violence and that they ban organizations that proclaim a violent mission. Facebook has expanded their Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be designated as a dangerous organization and banned from having any presence on Facebook’s platform.

Under this policy expansion, we will impose restrictions to limit the spread of content from Facebook Pages, Groups and Instagram accounts. We will also remove Pages, Groups and Instagram accounts where we identify discussions of potential violence, including when using veiled language and symbols particular to the movement to do so.

Facebook has removed over 790 Groups, 100 Pages, and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram. Those Pages, Groups, and Instagram accounts that have been restricted are still subject to removal as Facebook’s team continues to review their content against the updated policy.

For militia organizations and those encouraging riots, including some who may identify as antifa, Facebook removed over 980 Groups, 520 Pages, and 160 ads from Facebook. They also restricted over 1,400 hashtags related to these groups and organizations on Instagram.

It should be noted that Facebook says that they will allow people to post content that supports these movements and groups, so long as they do not otherwise violate Facebook’s content policies. What Facebook is doing is an effort to restrict the ability of these groups to organize on Facebook and Instagram.

Oculus Will Require Users to Log In with A Facebook Account

Oculus announced that users will be required to log into Oculus with their Facebook accounts. This change will start in October 2020. If you aren’t a fan of Facebook, and don’t want to make an account there, you will eventually be unable to use Oculus.

Starting in October 2020, everyone using an Oculus device for the first time will need to log in with a Facebook account. Existing users who already have an Oculus account will have the option to log in with Facebook and merge their Oculus and Facebook accounts. Existing users who choose not to merge their Oculus and Facebook accounts can continue using their Oculus for two years.

After January 1, 2023, we will end support for Oculus accounts. If you choose not to merge your accounts at that time, you can continue using your device, but full functionality will require a Facebook account. We will take steps to allow you to keep using content you have purchased, though we expect some games and apps may no longer work. This could be because they include features that require a Facebook account or because a developer has chosen to no longer support the app or game you purchased. All future unreleased Oculus devices will require a Facebook account, even if you already have an Oculus account.

Facebook says that it will be possible to log into Oculus with a Facebook account and still create or maintain a unique VR profile. If you don’t want your Oculus friends to find you by your Facebook name, you can set its visibility to “Only Me” in your Oculus settings.

There are plans for Facebook to enable multiple users to log into the same Oculus device with each one using their own Facebook account. This appears to be aimed at families who want to share the Oculus device. It also may be possible to share an Oculus device with friends.

The Verge reported that Facebook is not rolling out any new ads on the Oculus platform right now. Requiring users to sign in to Oculus with a Facebook account removes the last layer of separation between the two.

To me, it makes Oculus feel even more like a “walled garden” than before. Those who don’t want to make a Facebook account will not have any way to access whatever content Oculus provides. Video game consoles also tend to make their games only accessible on their own platform – with the exception of the ones that are also playable on PC.

The difference is that Facebook has a long history of sketchy behavior regarding how it treats its users, especially when it comes to the security of personal information. I have concerns that everything Facebook users say and do within Oculus will be used by Facebook in unexpected ways.

Facebook Launched Paid Online Events for Small Businesses

Facebook has launched a way for small businesses, creators, educators, and media publishers to earn money from the online events they host on Facebook. Page owners can create an online event, set a price, promote the event, and host the event, all in one place.

According to Facebook, combining marketing, payment and live video, paid online events meet the end-to-end needs of businesses. Pages can host events on Facebook Live to reach broad audiences, and Facebook is testing paid events with Messenger Rooms for more personal and interactive gatherings.

To me, the description Facebook gives this new feature sounds like it was influenced by COVID-19, and the limitations that small businesses are facing as a result. I also think Facebook had another reason for launching this now. Part of their post about it on Facebook Newsroom takes a swipe at Apple, and the company’s 30% App Store Tax.

Facebook makes it clear that they will not collect any fees from paid online events for at least the next year. For transactions on the web, and on Android in countries where Facebook has rolled out Facebook Pay, small businesses will keep 100% of the revenue they generate from paid online events.
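The revenue gap Facebook is highlighting is simple arithmetic. A sketch of the payout math under the two fee structures mentioned (the event price and attendance are made-up numbers for illustration):

```python
def payout(gross_revenue: float, platform_fee: float) -> float:
    """Net revenue a business keeps after the platform's cut, rounded to cents."""
    return round(gross_revenue * (1 - platform_fee), 2)

# A hypothetical paid online event: 200 attendees paying $10 each.
gross = 200 * 10.00

# Web/Android via Facebook Pay, with fees waived: the business keeps 100%.
print(payout(gross, 0.00))  # 2000.0
# In-app purchase on iOS, after Apple's 30% cut: the business keeps 70%.
print(payout(gross, 0.30))  # 1400.0
```

On a $2,000 event, that 30% cut is $600 the business never sees, which is the difference Facebook is calling attention to.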

We asked Apple to reduce its 30% App Store tax to allow us to offer Facebook Pay so we could absorb all costs for businesses struggling during COVID-19. Unfortunately, they dismissed both our requests and SMBs will only be paid 70% of their hard-earned revenue.

That’s definitely a “dig” at Apple – which is currently facing a lawsuit filed by Epic Games after Apple removed the iOS version of Fortnite from the App Store. The removal came after Epic Games added its own payment processing system into the iOS version of Fortnite. Apple appears to feel that in doing so, Epic Games violated Apple’s App Store guidelines.

It is good that Facebook is waiving the fees on the paid online events that businesses and creators host on Facebook. I find it interesting that Facebook promises to waive those fees for an entire year. It is unclear exactly what happens to those fees once that year is up.

Interestingly, Facebook appears to be taking this opportunity to try and paint itself as the “good guys” who just want to help out small businesses and creators. They look better than Apple does at the moment. The cynical part of me wonders if Facebook is attempting to use this situation as an opportunity to get out of the 30% App Store tax that Apple requires in order to allow Facebook Pay on the App Store.

Facebook Relaxed the Rules for Conservative Accounts

Facebook has allowed some conservative news outlets and personalities to repeatedly spread false information without facing any of the penalties that doing so would typically result in, NBC News reported. NBC News received this information from leaked materials, which they reviewed.

According to internal discussions from the last six months, Facebook has relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU and the pundit Charlie Kirk, were not penalized for violations of the company’s misinformation policies.

Facebook has fact-checking rules that determine the reach of posts and accounts. Reviews are done by third-party fact-checkers from the non-partisan International Fact-Checking Network. These fact-checkers review and rate public, newsworthy Facebook and Instagram posts – including ads, articles, photos, and videos.

NBC News reported that Facebook uses “strikes”: a page that posts inaccurate information receives one strike as a warning before Facebook takes action. Two strikes within 90 days put an account into “repeat offender” status, which can lead to reduced distribution of the account’s content and a temporary block on advertising on the platform.
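The strike system NBC News describes can be modeled as a rolling 90-day window, with strike removal dropping a page back below the threshold. A simplified sketch (the class and method names are my own, not Facebook's actual system):

```python
from datetime import date, timedelta

# Thresholds as reported by NBC News; everything else is illustrative.
STRIKE_WINDOW = timedelta(days=90)
REPEAT_OFFENDER_THRESHOLD = 2

class PageStrikes:
    """Tracks misinformation strikes against a page within a rolling window."""

    def __init__(self) -> None:
        self.strikes: list[date] = []

    def add_strike(self, when: date) -> None:
        self.strikes.append(when)

    def remove_strike(self, when: date) -> None:
        # The behavior the leaked documents reportedly describe:
        # deleting a strike during review erases it from the count.
        self.strikes.remove(when)

    def is_repeat_offender(self, today: date) -> bool:
        recent = [s for s in self.strikes if today - s <= STRIKE_WINDOW]
        return len(recent) >= REPEAT_OFFENDER_THRESHOLD
```

In this model, a page with two strikes 45 days apart is a repeat offender, but erasing either strike during review – as the leaked materials reportedly show happened for some conservative pages – makes the penalty disappear.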

The leaked information reviewed by NBC News reportedly showed that employees in the misinformation escalations team deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the past six months. Essentially, they were erasing strikes, reportedly with direct oversight from company leadership.

It appears that in some cases, strikes are removed for conservative accounts that Facebook feels are likely to go public about being penalized for posting misinformation. In other cases, conservative accounts that have a lot of active ads also seem to have their strikes removed.

Usually, I find it difficult to trust anything that comes from leaked documents. But, in this case, NBC News reported that Facebook spokesperson Andy Stone did not dispute the authenticity of the leaked documents, but claimed the leaked documents did not provide the full context of the situation. To me, that response sounds like some of what was in the leaked documents was accurate.