Tag Archives: Facebook

EFF Calls Facebook’s Campaign Against Apple “Laughable”



The Electronic Frontier Foundation (EFF) has shared its opinion about Facebook’s full-page newspaper ad campaign against Apple’s AppTrackingTransparency feature on iPhones. The EFF described Facebook’s campaign as “laughable”.

Facebook claimed that Apple’s new AppTrackingTransparency feature for iOS 14, iPadOS 14, and tvOS 14 “will hurt small businesses who benefit from access to targeted advertising services.” The EFF pointed out that Facebook is not telling you the whole story. Facebook’s real complaint, according to EFF, is about what Facebook stands to lose if its users learn more about exactly what it and other data brokers are up to behind the scenes.

Bottom line: “The Association of National Advertisers estimates that, when the ‘ad tech tax’ is taken into account, publishers are only taking home between 30 and 40 cents of every dollar [spent on ads].” The rest goes to third-party data brokers who keep the lights on by exploiting your information, and not to small businesses trying to work within a broken system to reach their customers.

EFF also pointed out that small businesses cannot compete with large ad-distribution networks on their own. Because the ad industry has promoted the fantasy that targeted advertising is superior to other methods of reaching customers, EFF argued, anything else inherently commands less value on ad markets.

Personally, I think EFF did an excellent job of explaining why Facebook’s “laughable” campaign is a problem. Facebook is worried that Apple’s AppTrackingTransparency feature will hurt Facebook’s chance to make money off the data it collects from its users.

This has nothing to do with an attempt to help small businesses. In my opinion, Facebook realizes that people don’t like to be tracked, and that targeted ads can be creepy. What we are seeing is Facebook having a panic attack about the amount of money it could lose once Apple, by default, prevents apps from tracking users and sharing their data without permission.


Apple Responded to Facebook’s Anti-Tracking Criticism



Apple has responded to Facebook’s criticism of Apple’s upcoming iOS 14 update. Specifically, Facebook appears to be angry that the update will require people to opt in to targeted advertising and tracking.

Apple provided a statement to MacRumors about the update:

We believe that this is a simple matter of standing up for our users. Users should know when their data is being collected and shared across other apps and websites – and they should have the choice to allow that or not. App Tracking Transparency in iOS 14 does not require Facebook to change its approach to tracking users and creating targeted advertising, it simply requires that they give users a choice.

Personally, I think that Apple’s decision to allow users to know when their data is being collected – and to have the choice to allow it or not – is a good one. I live in California, which enacted the California Consumer Privacy Act of 2018 (CCPA). It requires that businesses that collect a consumer’s personal information inform consumers, before the point of collection, about the categories of personal information to be collected and the purposes for which it is used. It also allows people to tell a business not to sell their personal information to third parties.

Apple’s upcoming iOS 14 update sounds like it is in compliance with the CCPA. It might also comply with the EU’s General Data Protection Regulation (GDPR), which likewise regulates how personal data is collected and processed.

One of the best things about Apple’s iOS 14 update is that, by default, it alerts all users about how a specific app will use their data. Users don’t have to figure out how to prevent data collection and tracking – Apple already does that for them. That said, if you really want Facebook (and other apps) to collect your data and sell it to third parties, you will have the ability to opt in to that.
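For readers curious about what the opt-in actually looks like from the developer’s side, here is a minimal sketch in Swift of how an iOS 14 app asks for tracking permission through Apple’s AppTrackingTransparency framework. It assumes the app has declared the required NSUserTrackingUsageDescription text in its Info.plist, and the function name is just for illustration.

```swift
import AppTrackingTransparency
import AdSupport

// Ask for tracking permission once, after the app becomes active.
// Until the user taps "Allow" in the system prompt, tracking is treated
// as denied and the advertising identifier (IDFA) reads as all zeroes.
func requestTrackingPermissionIfNeeded() {
    guard ATTrackingManager.trackingAuthorizationStatus == .notDetermined else {
        return // The user has already made a choice, or tracking is restricted.
    }

    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app read the IDFA for ad-tracking purposes.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            print("Tracking not allowed; no cross-app identifier available.")
        @unknown default:
            print("Unknown tracking authorization status.")
        }
    }
}
```

The key point is the default: if an app never asks, or the user declines the prompt, the system simply behaves as though the user said no.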


Facebook Uses Questionable Claims to Attack Apple



Starting early next year, Apple will enable a change in an upcoming iOS 14 update that Facebook is really angry about. MacRumors reported that the change will require apps to get users’ permission before tracking their activity for personalized advertising purposes. Facebook clearly does not want people to be able to opt out of targeted advertising.

Bloomberg reported that Facebook Inc. purchased a series of full-page newspaper ads in the New York Times, Wall Street Journal, and Washington Post titled: “We’re standing up to Apple for small businesses everywhere”.

In the full-page ad, Facebook claimed: “While limiting how personalized ads can be used does impact larger companies like us, these changes will be devastating to small businesses, adding to the many challenges they face right now.” To me, this is a very tiny acknowledgement that Apple’s iOS 14 update will harm Facebook’s ability to make money.

First off, it is questionable for Facebook to call targeted advertising “personalized ads”. Colorful language is often used to convince people that something they do not want is somehow beneficial to them. This is especially true when a large company tries to persuade you to let it access your data.

Secondly, Facebook is hoping that you will believe that small businesses will become unsustainable if Apple users choose to opt out of targeted advertising and tracking. That claim is questionable because the update hasn’t launched yet. Right now, there is absolutely no data showing that Apple’s privacy protection will kill off small businesses.

According to Bloomberg, Facebook is also upset about Apple’s newly launched “nutrition-label” style feature in its App Store. That feature outlines what data third-party apps collect. Bloomberg noted that Facebook may have seen this as an attack on Facebook’s app “given the amount of information it gathers.”


Germany Investigates Linkage Between Oculus and Facebook Network



Germany’s Bundeskartellamt (which TechCrunch translates as Germany’s Federal Cartel Office) has initiated abuse proceedings against Facebook to examine the linking of Oculus virtual reality products to the social network and the Facebook platform.

Andreas Mundt, President of the Bundeskartellamt, wrote:

“In the future, the use of the new Oculus glasses requires the user to also have a Facebook account. Linking virtual reality products and the group’s social network in this way could constitute a prohibited abuse of dominance by Facebook. With its social network Facebook holds a dominant position in Germany and is also already an important player in the emerging but growing VR (virtual reality) market. We intend to examine whether and to what extent this tying arrangement will affect competition in both areas of activity.”

In August, Facebook announced that it was changing the name of the VR business it acquired back in 2014 for around $2 billion – and had allowed to operate separately – to “Facebook Reality Labs,” signaling the assimilation of Oculus into its wider social empire, TechCrunch reported.

Also in August, Oculus announced that users would be required to log into Oculus with their Facebook accounts – beginning in October of 2020. Oculus users who did not have a Facebook account, and who did not want to make one, would eventually be unable to use Oculus.

TechCrunch reported that a Facebook spokesperson sent a statement. “While Oculus devices are not currently available for sale in Germany, we will cooperate fully with the Bundeskartellamt and are confident we can demonstrate that there is no basis to the investigation.”

We will have to wait and see what happens with Germany’s investigation into Facebook requiring Oculus users to have a Facebook account. Meanwhile, Oculus users in the United States, who want to continue using Oculus, are required to have a Facebook account. To me, it seems like if you want to use Oculus, you have to be tied to Facebook forever – or lose access.


FTC Sues Facebook for Illegal Monopolization



The Federal Trade Commission (FTC) announced that it has sued Facebook. The FTC alleges that Facebook is illegally maintaining its personal social networking monopoly through a years-long course of anticompetitive conduct. The lawsuit comes after a lengthy investigation in cooperation with a coalition of attorneys general of 46 states, the District of Columbia, and Guam.

The FTC is seeking a permanent injunction in federal court that could, among other things: require divestitures of assets, including Instagram and WhatsApp; prohibit Facebook from imposing anticompetitive conditions on software developers; and require Facebook to seek prior notice and approval for future mergers and acquisitions.

A separate lawsuit is led by New York Attorney General Letitia James, who stated that: “The lawsuit alleges that, over the last decade, the social networking giant illegally acquired competitors in a predatory manner and cut services to smaller threats – depriving users from the benefits of competition and reducing privacy protections and services along the way – all in an effort to boost its bottom line through increased advertising revenue.”

The Verge reported that this lawsuit centers on Facebook’s acquisitions, particularly its $1 billion purchase of Instagram in 2012. In addition to its acquisition strategy, the attorneys general allege that Facebook used the power and reach of its platform to stifle user growth for competing services. The Verge also reported that the FTC case cites Facebook’s decision to block Vine’s friend-finding feature after Twitter acquired Vine as a particularly flagrant instance of this behavior.

To me, it seems like Facebook could potentially face some legal consequences as a result of one – or both – of these lawsuits. It will be interesting to see what happens if Facebook is required to separate itself from Instagram and WhatsApp. If Facebook is required to improve user privacy, I think many people would want to know the specific details about how it will do that.


How Twitter and Facebook Will Handle Trump’s Account After January 20



The New York Times reported some details about how Facebook and Twitter will handle President Trump’s accounts after he is no longer a world leader. Once again, it appears that the two social media companies have very different plans about how to respond to whatever Trump posts after his presidential term is over.

In a recent Senate Judiciary Committee hearing, Senators asked Facebook’s Chief Executive, Mark Zuckerberg, and Twitter’s Chief Executive, Jack Dorsey, questions about their platforms. It appears that Republicans and Democrats had differing ideas about which topics were the most important to ask about.

The New York Times reported the following:

Jack Dorsey said, “If an account suddenly is not a world leader anymore, that particular policy goes away.” He was referring to Twitter’s current policy of adding a label to Trump’s tweets when their content is disputed or glorifies violence. Labeled tweets cannot be liked or retweeted.

Most Twitter users have to abide by rules that forbid threats, harassment, impersonation, and copyright violations. If someone breaks one (or more) of these rules, they may be required to delete the offending tweet, or their account may be temporarily banned.

According to The New York Times, Mark Zuckerberg said at the hearing that Facebook would not change the way it moderates Trump’s posts after he leaves office. Facebook has labeled some of Trump’s posts in which he made claims that Facebook deemed to be false information. Facebook users could still like and share those posts.

This information is useful for people who currently use Facebook and/or Twitter, as it allows them to decide for themselves which policy they would prefer. Those who want to read Trump’s posts after he is no longer President might choose Facebook, which will label misleading posts and leave them up. Those who would prefer a feed that is not cluttered with reactions to Trump’s misleading tweets may stick with Twitter.


Facebook Labels on Trump’s False Claims Didn’t Stop their Spread



Facebook has placed labels on content that includes misinformation about elections. The labels have been added to some of President Trump’s posts in which he made claims about the election that Facebook deemed to be false information. Unfortunately for Facebook (and its users), the labels did almost nothing to stop the spread of false information posted by President Trump.

BuzzFeed News reported that a Facebook employee asked last week whether Facebook had any data about the effectiveness of the labels. A data scientist revealed that the labels do very little to reduce the spread of false content.

The data scientist noted that adding the labels was not expected to reduce the spread of false content. Instead, they are used “to provide factual information in context to the post.” BuzzFeed News reported that the labels on President Trump’s posts that contained false information decreased reshares by about 8%, and that those posts were still among the ones that got the most engagement on the platform.

Why did that happen? The answer seems obvious, based on what BuzzFeed News reported. Facebook applied labels to some of President Trump’s posts that contained misinformation about the election, but it didn’t actually do anything to prevent users from liking or sharing those posts.

Twitter also applied labels to some of President Trump’s tweets that contained misinformation about elections. Adding a label disables the ability to Retweet or Like those tweets. Users can Quote-Tweet them if they want to add their own commentary about a specific labeled tweet.

On November 12, 2020, Twitter posted an update about its work regarding the 2020 U.S. elections. In it, Twitter stated that it saw an estimated 29% decrease in Quote Tweets of the labeled tweets, due in part to a prompt that warned people before sharing. In the same post, Twitter stated that it doesn’t believe the Like button provides sufficient thoughtful consideration prior to amplifying tweets.

I find it interesting that Twitter and Facebook appear to have entirely different ideas about what to do with election-related misinformation. Both applied labels, but Twitter took things a step further and disabled users’ ability to Like or Retweet those kinds of posts. Neither platform was 100% successful at stopping the spread of misinformation – but Twitter did a better job of it than Facebook.