Category Archives: Facebook

Facebook Tests Telling Users if Their Post was Removed by Automation



Facebook posted its First Quarterly Update on the Oversight Board. Part of the update involves sharing its progress on the board’s non-binding recommendations.

…The board’s recommendations touch on how we enforce our policies, how we inform users of actions we’ve taken and what they can do about it, and additional transparency reporting…

Facebook provided some examples of things it has done that the board recommended:

They launched and continue to test new user experiences that are more specific about how and why they remove content. I think this is a good idea, because there will always be someone new to Facebook who hasn’t been there long enough to learn what is, and is not, allowed.

They made progress on the specificity of their hate speech notifications by using an additional classifier that is able to predict what kind of hate speech is in the content: violence, dehumanization, mocking hate crimes, visual comparison, inferiority, contempt, cursing, exclusion, and/or slurs. Facebook says that people using Facebook in English will now receive more specific messaging when they violate the hate speech policy. The company will roll out more specific notifications for hate speech violations in other languages in the future.
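To illustrate what “more specific messaging” might look like in practice, here is a minimal sketch (my own illustration, not Facebook’s actual system) that maps a hypothetical classifier’s predicted hate speech subtype to a more specific removal notice:

```python
# A minimal, hypothetical sketch (not Facebook's actual system): map a
# classifier's predicted hate speech subtype to a more specific removal notice.

NOTIFICATION_TEMPLATES = {
    "violence": "Your post was removed because it appears to threaten or encourage violence.",
    "dehumanization": "Your post was removed because it appears to dehumanize a person or group.",
    "mocking_hate_crimes": "Your post was removed because it appears to mock victims of hate crimes.",
    "slurs": "Your post was removed because it appears to contain a slur.",
}

GENERIC_NOTIFICATION = "Your post was removed because it violates the hate speech policy."


def build_notification(predicted_labels):
    """Return the most specific notice available for the predicted subtypes."""
    for label in predicted_labels:
        if label in NOTIFICATION_TEMPLATES:
            return NOTIFICATION_TEMPLATES[label]
    return GENERIC_NOTIFICATION


print(build_notification(["slurs"]))            # specific message
print(build_notification(["unknown_subtype"]))  # falls back to the generic message
```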

I’m not sure that more specific notifications will influence people to stop posting hate speech. A user who is angry about having a post removed might double down and post something even worse. It is unclear to me whether Facebook imposes any penalty for posting hate speech (other than removing the post).

Facebook is running tests to assess the impact of telling people whether automation was involved in the enforcement. This likely means that if a user’s post is removed because it broke the rules, and the decision was made by automation – the user will be informed of that.

Personally, I think that last recommendation could be controversial. An individual might get really angry after learning that their post was removed by automation instead of by a human. This might lead to the user trying to convince Facebook to have a human check over that post (in the hopes of getting a more favorable result). If that happens a lot, I suspect that political leaders might add to the conversation – with their own recommendations.


Facebook Oversight Board Considers Posting of Private Residential Addresses



Facebook announced that its Oversight Board has accepted the company’s first policy advisory opinion referral. The board is being asked to consider whether posting private residential addresses on Facebook is acceptable.

It is important to know that Facebook itself states that the decision made by the Oversight Board is not binding. To me, that sounds like Facebook is giving itself an opportunity to either go along with – or completely ignore – what the Oversight Board determines.

…Access to residential addresses can be an important tool for journalism, civic activism, and other public discourse. However, exposing this information without consent can also create a risk to an individual’s safety and infringe on privacy…

Facebook is asking the Oversight Board for guidance on the following questions:

  • What information sources should render private information “publicly available?” (For instance, should we factor into our decision whether an image of a residence was already published by another publication?)
  • Should sources be excluded when they are not easily accessible or trustworthy (such as data aggregator websites, the dark web, or public records that cannot be digitally accessed from a remote location?)
  • If some sources should be excluded, how should Facebook determine the type of sources that won’t be considered in making private information “publicly available?”
  • If an individual’s private information is simultaneously posted to multiple places, including Facebook, should Facebook continue to treat it as private information or treat it as publicly available?
  • Should Facebook remove personal information despite its public availability elsewhere, for example, in news media, government records, or the dark web? That is, should Facebook limit the availability of such information on its platform, even if that means removing news articles that publish it or individual posts of publicly available government records?

I think we all know that posting someone else’s personal residential information on the internet, without the person’s permission, is wrong. Facebook shouldn’t need to consult an Oversight Board to understand that.


Facebook Tests a Prompt that Twitter Released Last Year



Today, Facebook tweeted about something that could be a new feature on the platform. Facebook will be testing a prompt similar to the one Twitter originally released in September of 2020. The prompt will encourage Facebook users to actually read an article before sharing it.

Starting today, we’re testing a way to promote more informed sharing of news articles. If you go to share a news article link you haven’t opened, we’ll show a prompt encouraging you to open it and read it before sharing it with others.

The Verge reported that those who see this pop-up can still choose to share the article without having opened it if they want to. To me, giving users the option of sharing an unread article anyway defeats the entire purpose of the prompt.

A Facebook spokesperson told The Verge that the test of this prompt would be rolled out to 6 percent of Android users worldwide. I suppose that the way the prompt is used – or ignored – might determine whether or not Facebook eventually rolls it out to everyone.

There are some flaws with this type of prompt. It appears that the prompt will, at most, get a person to open the article through Facebook before they share it. That doesn’t necessarily mean the person is going to read the article. A person who wants to share an article from a less-than-credible website will still be permitted to do so.
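As a rough illustration of how that flow seems to work (this is my own sketch of the behavior described above, not Facebook’s implementation), the prompt only changes what happens when the link has not been opened, and it can simply be dismissed:

```python
# A hypothetical sketch of the prompt flow described above; the function and
# its parameters are my own illustration, not Facebook's implementation.

def handle_share_attempt(link_was_opened: bool, dismiss_prompt: bool) -> str:
    """Decide what happens when a user tries to share a news article link."""
    if link_was_opened:
        return "shared"                     # no prompt is shown
    # The link was never opened, so the "read before sharing" prompt appears.
    if dismiss_prompt:
        return "shared without reading"     # the prompt can simply be dismissed
    return "article opened for reading"     # the user takes the suggestion


print(handle_share_attempt(link_was_opened=False, dismiss_prompt=True))
```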

Based on the article from The Verge, it appears that the primary reason Facebook is testing this prompt is to combat the spread of misinformation. That is a noble goal – but it won’t work if people choose to share articles from tabloids and other questionable sources.


Facebook Oversight Board Upholds Trump’s Suspension



Facebook created an Oversight Board to make the decision on whether or not to overturn Trump’s indefinite suspension. Later on, the Oversight Board chose to delay its decision and extended the public comments deadline for this case.

Today, Facebook posted information in their Newsroom titled: “Oversight Board Upholds Facebook’s Decision to Suspend Donald Trump’s Accounts”. It was written by Nick Clegg, VP of Global Affairs and Communications.

Today, the Oversight Board upheld Facebook’s suspension of former US President Donald Trump’s Facebook and Instagram accounts. As we stated in January, we believe our decision was necessary and right, and we’re pleased the board has recognized that the unprecedented circumstances justified the exceptional measure we took.

In the post, Nick Clegg stated that the Oversight Board has not required Facebook to immediately restore Mr. Trump’s accounts. It also has not specified the appropriate duration of the penalty. The Oversight Board called the open-ended nature of the suspension an “indeterminate and standardless penalty”. As a result, Facebook will now consider the Oversight Board’s decision and determine an action that is clear and proportionate.

In the meantime, Mr. Trump’s accounts remain suspended.

It is worth pointing out that the Oversight Board’s determination is not binding, which leaves room for Facebook to either choose to follow it – or to ignore it completely. The Oversight Board included a list of recommendations for Facebook to consider, but the company doesn’t have to adhere to any of them.

One of the recommendations stated that Facebook should “provide users with accessible information on how many violations, strikes, and penalties have been assessed against them, and the consequences that will follow future violations.” Personally, I think that would be a good thing for Facebook to implement. It could make people think twice before posting something that breaks Facebook’s terms of service.


Facebook’s Live Audio Rooms is a Clubhouse Competitor



It was bound to happen. Facebook has announced something called Live Audio Rooms, which they expect to make available to everyone on the Facebook app by the summer of 2021. It is obviously Facebook’s way of competing with Clubhouse.

We believe that audio is a perfect way for communities to engage around topics they care about. We’ll test Live Audio Rooms in Groups, making it available to the 1.8 billion people using Groups every month and the tens of millions of active communities on Facebook.

In addition, Facebook also plans to release Live Audio Rooms on Messenger this summer. It appears Facebook wants to give people who create a Live Audio Room the ability to record what is said and put that audio into a podcast. Naturally, this is because Facebook has teamed up with Spotify to bring podcasts to Facebook.

One good thing Facebook is doing is adding captions to the audio from Live Audio Rooms and Soundbites. Giving people who are hearing impaired or deaf the ability to know what is being said in a Live Audio Room makes the experience more accessible.

However, I’m not a fan of Facebook’s plan to enable people to take their Live Audio Rooms chat and upload it to their podcast. Right now, we don’t know for certain if Facebook will require creators of Live Audio Rooms to specifically gain permission to record those who attend. It is possible Facebook might put something in the “fine print” stating that entering a Live Audio Room means you consent to being recorded. In short, if you join a Live Audio Room, you should be mindful of what you say.

I believe that Facebook’s Live Audio Rooms will be a strong competitor to Clubhouse. There have been plenty of examples where Clubhouse has failed to protect the privacy of its users. People cannot join Clubhouse without receiving an invite from someone who is already on it. Clubhouse also requires users to upload their contact list – which feels to me like an invasion of privacy.

Facebook very likely has more users than Clubhouse. If you use Facebook (or its other products) you might be aware that Facebook has been grabbing your data. The biggest advantage Facebook has over Clubhouse is that Facebook will make it as easy as possible for people to host and join Live Audio Rooms.

In my opinion, if you want to start a podcast, you shouldn’t do it on Clubhouse or Facebook. Get a WordPress blog and find a credible podcast host provider. Doing so gives you control over where your podcast “lives”, and does not require you to feed your data to voracious companies (such as Facebook and Clubhouse).


Facebook’s Oversight Board Delays Decision on Trump Suspension



Twitter permanently suspended Trump’s account in January 2021, days after the riot at the U.S. Capitol. At the time, Twitter stated that the reason for the permanent suspension was “due to the risk of further incitement of violence.”

Facebook suspended Trump’s account for the same reasons. The difference between Facebook and Twitter is that Facebook’s ban was not permanent. At the time, Facebook CEO Mark Zuckerberg said that the platform would extend the block on Trump indefinitely, and for at least two weeks, until “the peaceful transition of power is complete.”

The transition from the Trump-Pence administration to the Biden-Harris administration happened in January of 2021. This puts Facebook in the difficult position of deciding whether or not to allow Trump to return to the platform. No matter what decision is made, one thing is certain – it will make a lot of people angry.

According to TechCrunch, Facebook has a self-styled and handpicked “Oversight Board” that has the task of deciding whether or not to overturn Trump’s indefinite suspension.

On April 16, 2021, Facebook’s Oversight Board posted a short thread on Twitter. The first tweet said: “(1/2): The Board will announce its decision on the case concerning US President Trump’s indefinite suspension from Facebook and Instagram in the coming weeks. We extended the public comments deadline for this case, receiving 9,000+ responses.”

That second tweet in the thread said: “(2/2): The Board’s commitment to carefully reviewing all comments has extended the case timeline, in line with the Board’s bylaws. We will share more information soon.”

The Hill reported: Facebook requested the board’s recommendation on suspensions when the user is a political leader, meaning the board’s decision on Trump could influence how Facebook handles bans on future leaders in the U.S. and around the world.

Personally, I think that if a public leader has been suspended from a social media platform, there is likely a good reason for it. Trump no longer holds any political office. I think Facebook’s Oversight Board should hold him to the same rules a regular person would face if their account was suspended and they asked for the ban to be lifted.


Facebook Allows Users to Call for the Death of Public Figures



Facebook’s bullying and harassment policy explicitly allows for “public figures” to be targeted in ways otherwise banned on the site, including “calls for [their] death”, The Guardian reported. The information comes from internal moderator guidelines that were leaked to The Guardian.

In short, it appears that Facebook thinks it is acceptable to allow public figures to be abused on its platform, including with death threats, simply because the company considers the person to be a public figure. I’m not sure why anyone who fits that definition would stay on Facebook. It seems dangerous.

The company’s definition of public figures is broad. All politicians count, whatever the level of government and whether they have been elected or are standing for office, as does any journalist who is employed “to write/speak publicly”.

Online fame is enough to qualify provided the user has more than 100,000 fans or followers on one of their social media accounts. Being in the news is enough to strip users of protections.

In addition, people who are mentioned in the title, subtitle, or preview of 5 or more news articles or media pieces within the last 2 years are counted as public figures.

Children who are under the age of 13 are never counted as public figures. That cutoff is troubling, as it implies that teenagers 13 or older – whom Facebook considers to be public figures – can be targeted with death threats. That’s definitely not acceptable!
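Putting the reported criteria together, a rough sketch of how such a classification might work looks like this (the function, field names, and structure are my own illustration based on The Guardian’s reporting, not Facebook’s actual code):

```python
# A hypothetical sketch of the leaked "public figure" criteria as reported by
# The Guardian; the data structure and field names are my own paraphrase,
# not Facebook's actual implementation.
from dataclasses import dataclass


@dataclass
class User:
    age: int
    is_politician: bool = False          # any level of government, elected or running
    is_public_journalist: bool = False   # employed to write/speak publicly
    max_follower_count: int = 0          # largest following on any one social account
    news_mentions_last_2_years: int = 0  # title/subtitle/preview mentions in news pieces


def is_public_figure(user: User) -> bool:
    """Apply the criteria described in the leaked moderator guidelines."""
    if user.age < 13:
        return False  # children under 13 are never counted as public figures
    return (
        user.is_politician
        or user.is_public_journalist
        or user.max_follower_count > 100_000
        or user.news_mentions_last_2_years >= 5
    )


print(is_public_figure(User(age=16, max_follower_count=250_000)))  # True
print(is_public_figure(User(age=40)))                              # False
```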

The internal moderator documents state that private individuals cannot be targeted with “calls for death” on Facebook. This is not so for those Facebook considers to be public figures.

According to The Guardian, public figures cannot be “purposefully exposed” to “calls for death”. What does that mean? The documents indicate that calling for the death of a local minor celebrity is acceptable to Facebook so long as the user who is making the threat does not tag the person whom they are threatening.

There are problems with that practice. Obviously, the public figure who is the target of a death threat is unlikely to see it unless they have been tagged in the post. That leaves them at risk if the person who wants them dead decides to act on the threat offline.

Once Facebook considers a person to be a “public figure” – it sticks. There does not appear to be a way to discover if you are considered one, which makes it impossible to have that designation removed by Facebook.