Category Archives: Facebook

Facebook will Shut Down its VPN App Onavo



TechCrunch reported that Facebook “will end its unpaid market research programs and proactively take its Onavo VPN app off the Google Play store”. This comes after TechCrunch’s investigation into Onavo code being used in a “Facebook Research” app that was sucking up data (including data from teenagers).

The Onavo Protect app will eventually shut down, and it will immediately stop pulling in data from users for market research, though it will continue operating as a virtual private network in the short term so users can find a replacement.

TechCrunch pointed out: “To preempt any more scandals around Onavo and the Facebook Research app and avoid Google stepping in to forcibly block the apps, Facebook is now taking Onavo off the Play Store and stopping recruitment of Research testers.”

On the one hand, it is good that Facebook is removing Onavo from the Google Play Store and that it will be shutting it down. It is a step in the right direction. Personally, I do not trust that this move means that Facebook is suddenly going to act more ethically toward its users and their data. What is stopping Facebook from creating a new app that sucks up data as much as Onavo and Facebook Research did?

Facebook could have avoided this whole problem simply by being honest and ethical with its users. Instead, it decided to behave poorly and be sneaky about what it was doing. If TechCrunch hadn’t investigated the Facebook Research app, it seems very likely that Facebook would have continued to use it. Personally, I don’t see why anyone should trust that Facebook will change its data-addicted ways.


House of Commons Report Calls Facebook “Digital Gangsters”



The House of Commons Digital, Culture, Media, and Sport Committee released a report titled “Disinformation and ‘fake news’: Final Report”. The report, the product of an 18-month inquiry, focuses heavily on Facebook. It concludes that Facebook broke privacy and competition law and should be subject to statutory regulation.

The Guardian posted a detailed article about the report that included this quote: “Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law.”

The Digital, Culture, Media, and Sport Committee’s report includes the following in its summary:

The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight. But only governments and the law are powerful enough to contain them. The legislative tools already exist. They must now be applied to digital activity, using tools such as privacy laws, data protection legislation, antitrust and competition law. If companies become monopolies they can be broken up, in whatever sector. Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by users across its platforms.

According to The Guardian, the report:

  • Accuses Mark Zuckerberg, Facebook’s co-founder and chief executive, of contempt for parliament in refusing three separate demands for him to give evidence, instead sending junior employees unable to answer the committee’s questions.
  • Warns British electoral law is unfit for purpose and vulnerable to interference by hostile foreign actors, including agents of the Russian government attempting to discredit democracy.
  • Calls on the British government to establish an independent investigation into “foreign influence, disinformation, funding, voter manipulation and the sharing of data” in the 2014 Scottish independence referendum, the 2016 EU referendum and the 2017 general election.

TechCrunch reported that Facebook rejected all claims that it breached data protection and competition law. The article included a statement from Facebook’s UK public policy manager, Karim Palant.

It sounds to me like Facebook might actually face some consequences in the UK. Earlier this month, Germany prohibited Facebook from combining user data from different sources (such as Instagram and WhatsApp). Will the United States decide to regulate Facebook? Or will it allow Facebook to continue – to use the words from the Digital, Culture, Media, and Sport Committee’s report – to behave like ‘digital gangsters’?


Germany Ordered Facebook to Restrict How it Collects and Combines Data



Germany’s Bundeskartellamt (Federal Cartel Office) has prohibited Facebook from combining user data from different sources. The decision is not yet final, and Facebook has one month to appeal it to the Düsseldorf Higher Regional Court.

The decision imposes the following restrictions:

  • Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users’ voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook’s data.
  • Collecting data from third-party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.
  • If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.

The Bundeskartellamt points out that many users are not aware that use of Facebook is conditional on Facebook being able to collect an almost unlimited amount of any type of user data from third-party sources, allocate it to the users’ Facebook accounts, and use it for numerous data-processing purposes.

It notes that third-party websites that include embedded “Like” or “Share” buttons enable data to flow to Facebook. The Bundeskartellamt said: “It is not even necessary, e.g. to scroll over or click on a “Like” button. Calling up a website with an embedded “Like” button will start the data flow. Millions of such interfaces can be encountered on German websites and on apps”.
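To make that mechanism concrete, here is a minimal sketch (in TypeScript, running in a browser) of how a site typically embeds the Like button. The plugins/like.php address is Facebook’s standard public iframe embed; the dimensions and exact parameters here are illustrative assumptions. The key point is that the request to facebook.com fires as soon as the page renders the iframe, carrying the page’s URL and any Facebook cookies, before the visitor clicks or hovers on anything.

```typescript
// Illustrative sketch only: how a third-party site embeds Facebook's Like button.
// Assumes the standard public plugins/like.php iframe embed; parameters are examples.
const likeFrame = document.createElement("iframe");
likeFrame.src =
  "https://www.facebook.com/plugins/like.php?href=" +
  encodeURIComponent(window.location.href); // tells Facebook which page is being viewed
likeFrame.width = "150";
likeFrame.height = "35";

// Appending the iframe makes the browser request facebook.com immediately,
// sending along any Facebook cookies; no click or hover is needed.
document.body.appendChild(likeFrame);
```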

The BBC reported that the UK-based campaign group Privacy International has said that if the German ruling holds, Facebook should extend the same rights to its other users.

Facebook, as you would expect, disagrees with the Bundeskartellamt. Facebook says it complies with the GDPR and claims that “using information across services helps to make them better and protect people’s safety”. Facebook says it will make the Bundeskartellamt defend these arguments in court.


The “Facebook Research” App was Sucking Up Data



TechCrunch released an extremely detailed article about the “Facebook Research” app that is well worth reading. This disturbing program recruited teenagers (as well as adults up to age 35) and paid them to install a VPN that sucked up all of their phone and web activity.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound, and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” – a fitting name for Facebook’s effort to map new trends and rivals around the globe.

After TechCrunch’s article was posted, Apple banned Facebook’s Research VPN app (before Facebook could shut it down).

In addition, Apple revoked Facebook’s Enterprise Certificate, which Facebook had been abusing by using it for the “Facebook Research” app. TechCrunch reported that, as a result, Facebook’s legitimate employee-only apps stopped working.

The sneakiness of the “Facebook Research” app is appalling. It bothers me that Facebook sought out teenagers (ages 13 to 17) and got their attention by emphasizing that they would be paid not only to install the data-sucking app but also for convincing their friends to use it.

It appears that Facebook required teens to get permission from their parents before they could install the app. It is unclear to me how Facebook would differentiate between a parent actually giving permission and the teen simply clicking through the consent screens themselves.

Last week, Reveal reported that Facebook had encouraged game developers to let children spend money on Facebook games without their parents’ permission.

In my opinion, Facebook appears to have a pattern of preying on teens and children by intentionally obscuring the ways the minors are making money for Facebook. Parents may want to reconsider allowing their kids to use Facebook and its apps.


Facebook Removed Inauthentic Behavior from Russia



Facebook announced that it has removed more coordinated inauthentic behavior, this time originating in Russia. This follows the October 2018 removal of coordinated inauthentic behavior that originated in Iran.

Facebook’s Nathaniel Gleicher, Head of Cybersecurity Policy, wrote: “Today we removed 364 Facebook Pages and accounts for engaging in coordinated inauthentic behavior as part of a network that originated in Russia and operated in the Baltics, Central Asia, the Caucasus, and Central and Eastern European countries.”

Facebook found:

  • Presence on Facebook: 289 Pages and 75 Facebook accounts. They did not find any associated accounts on Instagram.
  • Followers: About 790,000 accounts followed one or more of these Pages.
  • Advertising: Around $135,000 in spending for ads on Facebook paid for in euros, rubles, and US dollars. The first ad ran in October 2013, and the most recent ad ran in January 2019. Facebook has not completed a review of the organic content coming from those accounts.
  • Events: Around 190 events were hosted by these Pages. The first was scheduled for August 2015, and the most recent was scheduled for January 2019. Up to 1,200 people expressed interest in at least one of these events. Facebook cannot confirm whether any of those events actually occurred.

Facebook shared information about their investigation with US law enforcement, the US Congress, other technology companies, and policymakers in other countries.

Facebook also removed 107 Facebook Pages, Groups, and accounts, as well as 41 Instagram accounts, for engaging in coordinated inauthentic behavior as part of a network that originated in Russia and operated in Ukraine.

Facebook found:

  • Presence on Facebook and Instagram: 26 Pages, 77 Facebook accounts, and 4 Groups, as well as 41 Instagram accounts.
  • Followers: About 180,000 Facebook accounts followed one or more of these Pages, and more than 55,000 accounts followed one or more of these Instagram accounts.
  • Advertising: Around $25,000 in spending for ads on Facebook and Instagram paid for in rubles. The first ad ran in January of 2018, and the most recent ad ran in December of 2018. Facebook has not completed a review of the organic content coming from these accounts.

What troubles me the most about this is how long those Pages and accounts were on Facebook before they were removed. It is clear that people should not trust that everything they see on Facebook is true. People also shouldn’t assume that the accounts and Pages they interact with are actually who they present themselves to be.


Zuckerberg Mentions Progress in his Year End Post



Mark Zuckerberg wrote a long end-of-the-year post on Facebook in which he said he was proud of the progress they made in 2018. He starts by mentioning what he identified as the most important issues facing the Facebook community. In my opinion, Mark Zuckerberg downplayed the biggest concerns that people have about Facebook.

Mark Zuckerberg identified some of the most important issues as: preventing election interference, stopping the spread of hate speech and misinformation, making sure people have control of their information, and ensuring that Facebook’s services improve people’s well-being. He said he was proud of the progress Facebook made on those issues.

What changes were made? Mark Zuckerberg says they now have more than 30,000 people working on safety, and that Facebook invests billions of dollars in security yearly. He also said they have multi-year plans to overhaul their systems.

A paragraph in Mark Zuckerberg’s post suggests that those efforts are not really going to fix the problems:

That doesn’t mean we’ll catch every bad actor or piece of bad content, or that people won’t find more examples of past mistakes before we improved our systems. For some of these issues, like election interference or harmful speech, the problems can never be fully solved. They’re challenges against sophisticated adversaries and human nature where we must constantly work to stay ahead.

He goes on to say that Facebook has improved its systems for identifying fake accounts and coordinated information campaigns that Facebook identifies as accounting for most of the interference. Mark Zuckerberg says they now remove millions of fake accounts every day. They also partnered with fact-checkers worldwide to identify misinformation and reduce its distribution.

Personally, it feels like Mark Zuckerberg is aware of the problems, but not quite getting that the damage has already been done. Yes, Facebook should be more proactive in efforts to stop the problems Zuckerberg identified. But, it sounds like he’s not convinced that those efforts will be successful. Instead, he’s trying to emphasize how proud he is of the progress Facebook made in 2018.


Facebook Canceled “Common Ground” Project Designed to Reduce Toxic Content



As you may have noticed, discussions of politics on social media can quickly become extremely toxic. According to the Wall Street Journal, Facebook had started working on a feature that was designed to reduce that toxicity and encourage more civil discussion. It was called “Common Ground”, and Facebook canceled the project before launching it.

Deepa Seetharaman wrote in the Wall Street Journal that, according to sources, Joel Kaplan (Facebook’s vice president of global public policy) and other executives shelved “Common Ground” because of concerns that it could lead to accusations that Facebook was biased against conservatives.

Mr. Kaplan balked when briefed on internal Facebook research that found right-leaning users tended to be more polarized, or less exposed to different points of view, than those on the left, according to people familiar with the analysis. That internal research tracks with the findings of academic studies.

Mr. Kaplan, joined by other Facebook executives, argued that the efforts to mitigate polarization could disproportionately hurt conservative voices, triggering claims of bias and exposing Facebook to allegations of social-engineering.

The Verge explained that “Common Ground” would have changed the way Facebook’s News Feed was ranked. It appears that Joel Kaplan, and other Facebook executives, chose not to use the “Common Ground” tool because they were concerned that “conservative users would be disproportionately impacted by the changes.”

Personally, I find it disappointing that some Facebook executives valued the toxic comments over the opportunity to make Facebook into a place where people could potentially find common ground with each other.