Germany’s Bundeskartellamt (which TechCrunch translates as Germany’s Federal Cartel Office) has initiated abuse proceedings against Facebook to examine the tying of Oculus virtual reality products to the Facebook social network and platform.
Andreas Mundt, President of the Bundeskartellamt, wrote:
“In the future, the use of the new Oculus glasses requires the user to also have a Facebook account. Linking virtual reality products and the group’s social network in this way could constitute a prohibited abuse of dominance by Facebook. With its social network Facebook holds a dominant position in Germany and is also already an important player in the emerging but growing VR (virtual reality) market. We intend to examine whether and to what extent this tying arrangement will affect competition in both areas of activity.”
In August, Facebook announced that it was changing the name of the VR business it acquired back in 2014 for around $2 billion, and had until then allowed to operate separately, to “Facebook Reality Labs,” signaling the assimilation of Oculus into its wider social empire, TechCrunch reported.
Also in August, Oculus announced that users would be required to log into Oculus with their Facebook accounts – beginning in October of 2020. Oculus users who did not have a Facebook account, and who did not want to make one, would eventually be unable to use Oculus.
TechCrunch reported that a Facebook spokesperson sent a statement. “While Oculus devices are not currently available for sale in Germany, we will cooperate fully with the Bundeskartellamt and are confident we can demonstrate that there is no basis to the investigation.”
We will have to wait and see what comes of Germany’s investigation into Facebook requiring Oculus users to have a Facebook account. Meanwhile, Oculus users in the United States who want to continue using their devices are required to have one. To me, it seems that using Oculus means being tied to Facebook indefinitely, or losing access.
The Federal Trade Commission (FTC) announced that it has sued Facebook. The FTC alleges that Facebook is illegally maintaining its personal social network monopoly through a years-long course of anticompetitive conduct. The lawsuit comes after a lengthy investigation in cooperation with a coalition of attorneys general of 46 states, the District of Columbia, and Guam.
The FTC is seeking a permanent injunction in federal court that could, among other things: require divestitures of assets, including Instagram and WhatsApp; prohibit Facebook from imposing anticompetitive conditions on software developers; and require Facebook to seek prior notice and approval for future mergers and acquisitions.
A separate lawsuit is led by New York Attorney General Letitia James, who stated: “The lawsuit alleges that, over the last decade, the social networking giant illegally acquired competitors in a predatory manner and cut services to smaller threats – depriving users of the benefits of competition and reducing privacy protections and services along the way – all in an effort to boost its bottom line through increased advertising revenue.”
The Verge reported that this lawsuit centers on Facebook’s acquisitions, particularly its $1 billion purchase of Instagram in 2012. In addition to its acquisition strategy, the attorneys general allege that Facebook used the power and reach of its platform to stifle user growth for competing services. The Verge also reported that the FTC case cites Facebook’s decision to block Vine’s friend-finding feature after Twitter acquired the app as a particularly flagrant instance of this behavior.
To me, it seems like Facebook could potentially face some legal consequences as a result of one – or both – of these lawsuits. It will be interesting to see what would happen if Facebook is required to separate itself from Instagram and WhatsApp. If Facebook is required to improve user privacy, I think many people would want to know the specific details about how it will do that.
Facebook has placed labels on content that includes misinformation about elections. The labels have been added to some of President Trump’s posts in which he made claims about the election that Facebook deemed to be false information. Unfortunately for Facebook (and its users), the labels did almost nothing to stop the spread of false information posted by President Trump.
BuzzFeed News reported that a Facebook employee asked last week whether Facebook had any data about the effectiveness of the labels. A data scientist revealed that the labels do very little to reduce the spread of false content.
The data scientist noted that the labels were not expected to reduce the spread of false content; instead, they are used “to provide factual information in context to the post.” BuzzFeed News reported that the labels on President Trump’s posts containing false information decreased reshares by only about 8%, and that those posts were among the ones that got the most engagement on the platform.
Why did that happen? The answer seems obvious, based on what BuzzFeed News reported. Facebook applied some labels to some of President Trump’s posts that contained misinformation about the election. It didn’t actually do anything to prevent users from liking or sharing those posts.
Twitter also applied labels to some of President Trump’s tweets that contained misinformation about elections. The addition of a label disables a user’s ability to Retweet or Like those tweets. Users can still Quote-Tweet them if they want to add their own commentary regarding a specific labeled tweet.
On November 12, 2020, Twitter posted an update about their work regarding the 2020 U.S. Elections. In it, Twitter stated that they saw an estimated 29% decrease in Quote Tweets of the labeled tweets due in part to a prompt that warned people prior to sharing. In the same post, Twitter stated that they don’t believe that the Like button provides sufficient, thoughtful consideration prior to amplifying tweets.
I find it interesting that Twitter and Facebook appear to have entirely different ideas about what to do about election-related misinformation. Both applied labels, but Twitter took things a step further and disabled users’ ability to Like or Retweet those kinds of posts. Neither platform was 100% successful at stopping the spread of misinformation – but Twitter did a better job of it than Facebook.
In September, Facebook announced that it won’t accept political ads in the week before the US Election. Their ban on political ads would only affect the ones submitted after October 27, 2020.
Recently, Nick Clegg, Facebook’s vice president of global affairs and communications, told the French weekly Journal du Dimanche that a total of 2.2 million ads on Facebook and Instagram have been rejected, and 120,000 posts withdrawn, for attempting to “obstruct voting” in the upcoming US election. In addition, Facebook has posted warnings on 150 million pieces of false information that were on Facebook and Instagram.
Facebook has been increasing its efforts to avoid a repeat of events leading up to the 2016 US presidential election, won by Donald Trump, when its network was used for attempts at voter manipulation carried out from Russia.
There were similar problems ahead of Britain’s 2016 referendum on leaving the European Union.
According to Nick Clegg, Facebook has thirty-five thousand employees working on the security of Facebook’s platforms and contributing to election integrity. The company also has fact-checking partnerships with 70 specialized media outlets, including five in France. Facebook also uses artificial intelligence that Nick Clegg says has “made it possible to delete billions of posts and fake accounts, even before they are reported by users.”
It appears that Facebook is putting in some effort to remove political misinformation, and also to reject unacceptable political ads. To me, this is a starting point that should have begun before the US primary elections and caucuses. Waiting until right before Election Day to clean up its platforms is too late.
Facebook announced that it has updated its hate speech policy to prohibit any content that denies or distorts the Holocaust. This decision is part of Facebook’s ongoing effort to remove hate speech from its platform.
In its announcement, Facebook wrote: “Today’s announcement marks another step in our efforts to fight hate on our services. Our decision is supported by the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust, especially among young people. According to a recent survey of adults in the US aged 18-39, almost a quarter said they believed the Holocaust was a myth, that it had been exaggerated, or that they weren’t sure.”
Beginning later this year, anyone who searches Facebook for terms associated with the Holocaust or its denial will be directed to credible information off the platform.
Facebook states that enforcement of these policies cannot happen overnight. They need time to train their reviewers and systems on enforcement of the new policies. To me, it sounds like reporting content that violates this new policy would be welcomed by Facebook. What better way to train reviewers and systems on enforcement than by giving them plenty of examples that (more than likely) are in violation of this new policy?
As a former teacher, I am absolutely astounded that so many people are ignorant about the Holocaust. My assumption was that this historical topic was still being taught to students. As such, it is good that Facebook will direct people who are ignorant about the Holocaust to credible resources where they can learn about it.
Facebook announced some steps it is taking to help secure the integrity of the US elections. According to Facebook, these steps are to encourage voting, connect people to authoritative information, and reduce the risk of post-election confusion.
Mark Zuckerberg made a lengthy post on Facebook about this. Here is a small portion of it:
The US elections are just two months away, and with Covid-19 affecting communities across the country, I’m concerned about the challenges people could face when voting. I’m also worried that with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country…
Here’s what Facebook plans to do:
- We won’t accept new political ads in the week before the election.
- We’ll remove posts that claim that people will get COVID-19 if they take part in voting, and we’ll attach a link to authoritative information about the coronavirus to posts that might use COVID-19 to discourage voting.
- We will attach an informational label to content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud.
- If any candidate or campaign tries to declare victory before the final results are in, we’ll add a label to their posts directing people to official results from Reuters and the National Election Pool.
Personally, I think Facebook should have started this work much earlier in the year, before the first caucuses were held. Imagine how much misinformation could have been removed – or at least labeled as such – if Facebook had taken this kind of action right from the start.
CNBC reported that Facebook users will still see political ads during the week of the election. The ban only affects political ads that were submitted after October 27, 2020. Older political ads won’t be removed.
CNBC also points out that the changes will go into effect after millions have already voted. In states that allow mail-in and absentee voting, people are expected to cast their ballots before Election Day. The damage from false information on Facebook will have already swayed users’ views.
Another problem is that Facebook users, including political candidates, will still be able to spread false information right up through election day. CNBC says the only posts specifically banned are ones saying that people will catch COVID-19 if they vote in person.
Facebook announced that it is introducing a forwarding limit on Facebook Messenger. From now on, messages can only be forwarded to five people or groups at a time. The purpose of this limitation, according to Facebook, is to slow the spread of viral misinformation and harmful content that has the potential to cause real-world harm.
Facebook’s announcement states: “We want Messenger to be a safe and trustworthy platform to connect with friends and family. Earlier this year, we introduced features like safety notifications, two-factor authentication, and easier ways to block and report unwanted messages. This new feature provides yet another layer of protection by limiting the spread of viral misinformation or harmful content, and we believe it will help keep people safer online.”
It is pretty obvious that viral misinformation is easily spread on social media. Topics like politics, elections, voting information, and COVID-19 tend to be cluttered with misinformation from those who want to trick people into believing something that simply isn’t true. Unfortunately, what happens on social media doesn’t always stay on social media. Those who are fooled into believing misinformation might end up harming themselves or others.
Personally, I think it is smart for Facebook to limit the reach of misinformation on Messenger with a forwarding limit of five people or groups at a time. Nobody wants to get questionable messages from strangers who clearly have an agenda they want to push. The forwarding limit should slow down those who want to spend their free time spreading misinformation. Perhaps they will give up.
That said, it would have been smarter for Facebook to crack down on the spread of misinformation much earlier. It is unfortunate that Facebook (and other social media sites) allowed misinformation on important topics to spread across their platforms for so long.