Category Archives: Facebook

Thief Stole Payroll Data of Facebook Employees



Facebook has a history of not being very good at protecting users’ data. This time, things are a bit different. Bloomberg reported that a thief stole payroll data for tens of thousands of Facebook employees. The unexpected thing is that the thief did not obtain it by hacking.

According to Bloomberg, the personal information of tens of thousands of Facebook workers was compromised last month when a thief stole several corporate hard drives from an employee’s car. Why did the employee have the hard drives in his car? That question hasn’t been answered yet.

What is known is that the employee was a member of Facebook’s payroll department. The employee was not supposed to take the hard drives outside the office. Apparently, Facebook has taken some disciplinary action against the employee.

A Facebook spokeswoman told Bloomberg: “We worked with law enforcement as they investigated a recent car break-in and theft of an employee’s bag containing company equipment with employee payroll information stored on it.” The Facebook spokeswoman described the situation as “a smash and grab crime”.

It is a really strange situation. The car break-in happened on November 17, 2019, a little before the Thanksgiving holiday, when many people shop for holiday presents (and leave them in their cars). I suspect the thief had no idea what was in the bag when they stole it.

Bloomberg reported that Facebook started alerting employees affected by the situation on December 13, 2019. The employees were encouraged to notify their banks, and were offered a subscription to an identity theft monitoring service.


Facebook Complied with Singapore’s “Fake News” Law



Facebook issued a “fake news” notice on a post made by The States Times Review at the request of the Singapore government. According to the BBC, Facebook said it “is legally required to tell you that the Singapore government says this post has false information.” Singapore claimed the post contained “scurrilous accusations”.

The BBC reported that the “fake news” label was added to the bottom of the original post. The post itself was not altered. The correction label is visible only to Facebook’s Singapore users.

The Singapore law is called the Protection from Online Falsehoods and Manipulation Act. It went into effect in October of this year. It allows the Singapore government to order online platforms to correct or remove statements it considers false and “against the public interest”. A person guilty of breaking this law could be heavily fined and face a prison sentence of up to five years.

The same law also bans the use of fake accounts or bots to spread “fake news”. The penalty for this is up to S$1m ($733,700) and a jail term of up to 10 years.

Reuters reported that the Singapore government initially ordered the Facebook user who runs the States Times Review blog, Alex Tan, to issue a correction on the post. The article reportedly contains accusations about the arrest of a whistleblower and election rigging.

Alex Tan, who does not live in Singapore and says he is an Australian citizen, refused to post the requested correction notice. So, the Singapore government required Facebook to do it. According to Reuters, authorities said that Alex Tan is now under investigation.

Personally, I find this terrifying. In this situation, the government of Singapore used one of its laws on a person who not only does not live in Singapore, but is also a citizen of Australia.

I do not understand why Facebook was so quick to do what the Singapore government wanted, especially considering that Facebook refuses to fact-check (or apply a “fake news” warning to) political advertisements from the United States. It is clear that Facebook does not mind “fake news” from other countries – so why does Singapore have so much power over what Facebook puts that label on?


Facebook Introduces Facebook Pay



Facebook is introducing Facebook Pay. The company describes it as “a convenient, secure and consistent payment experience across Facebook, Messenger, Instagram, and WhatsApp.”

Facebook Pay will begin rolling out on Facebook and Messenger this week in the US for fundraisers, in-game purchases, event tickets, person-to-person payments on Messenger and purchases from select Pages and businesses on Facebook Marketplace. And, over time, we plan to bring Facebook Pay to more people and places, including for use across Instagram and WhatsApp.

Facebook points out that Facebook Pay is built on existing financial infrastructure and partnerships, and is separate from the Calibra wallet which will run on the Libra network.

That is probably a good decision on Facebook’s part, because several of Libra’s founding members have dropped out. I don’t think anyone should trust that Libra will be stable unless and until it gets additional companies to sponsor it.

But this doesn’t necessarily mean that Facebook Pay is a good idea. The Verge reported in October that PayPal, Visa, Mastercard, Stripe, Mercado Pago, and eBay all dropped out of the Libra project. To me, it seems like a long shot that the companies that pulled out of Libra would turn around and attach themselves to Facebook Pay.

But even if they did, and other credit card companies also decided to get on board with Facebook Pay, that brings up another problem: how much do you trust Facebook with your credit card number? Earlier this year, the FTC imposed a $5 billion penalty on Facebook and required the company to boost its accountability and transparency. That penalty stemmed largely from the FTC’s investigation into Facebook’s handling of the Cambridge Analytica scandal.


Facebook and YouTube are Removing Alleged Name of Whistleblower



It is stunning how much damage people can do by posting the (potential) name of a whistleblower on social media and having that name passed around. This poses a dilemma for social media platforms. Both Facebook and YouTube are deleting content that includes the alleged name of the whistleblower whose complaint sparked a presidential impeachment inquiry. Twitter is not.

The New York Times reported a statement it received in an email from a Facebook spokeswoman:

“Any mention of the potential whistleblower’s name violates our coordinating harm policy, which prohibits content ‘outing of witness, informant or activist’,” a Facebook spokeswoman said in an emailed statement. “We are removing any and all mentions of the potential whistleblower’s name and will revisit this decision should their name be widely published in the media or used by public figures in debate.”

The New York Times reported that an article that included the alleged name of the whistleblower was from Breitbart. This is interesting, because Breitbart is among the participating publications that Facebook included in its “high quality” news tab. (Other publications include The New York Times, the Washington Post, the Wall Street Journal, BuzzFeed, Bloomberg, ABC News, the Chicago Tribune, and the Dallas Morning News.) Facebook has been removing that article, which indicates that the company does not feel the article is “high quality”.

CNN reported that a YouTube spokesperson said videos mentioning the potential whistleblower’s name would be removed. The spokesperson said YouTube would use a combination of machine learning and human review to scrub the content. The removals, the spokesperson said, would affect the titles and descriptions of videos as well as the videos’ actual content.

The Hill reported that Twitter said in a statement that it will remove posts that include “personally identifiable information” on the alleged whistleblower, such as his or her cell phone number or address, but will keep up tweets that mention the name.


Facebook will not Fact-Check Political Ads



Those who use Facebook should view the political ads they see on the social media platform with a healthy dose of skepticism. CNN reported that Facebook will not fact-check political ads.

That means individual people will need to do their own research on whatever claims those types of ads contain. Sadly, I don’t think that most people will bother to do their own fact-checking, especially for political ads that spread misinformation matching their own political leanings.

Facebook announced its decision not to fact-check political speech in September of 2019. Facebook stated that it does not believe it is appropriate for the company “to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public scrutiny.”

That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under eligibility guidelines. This means we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.

That said, Facebook doesn’t appear to be adhering to the part about demoting ads that contain previously debunked content. The New York Times reported in October 2019 that a 30-second ad released by the Trump campaign spread misinformation about Joe Biden and the impeachment inquiry into President Trump.

According to The New York Times, the Biden campaign asked Facebook to take down that ad. Facebook responded to the Biden campaign by saying the ad had been viewed five million times on the site, and declaring that the ad did not violate company policies.

Facebook’s decision to opt out of fact-checking political ads extends to the UK. According to CNN, Facebook will not fact-check ads run by British political parties or the thousands of candidates running for election to the House of Commons. This comes as the UK prepares for a historic December election centered on Brexit.


Facebook Introduces Facebook News



Facebook announced that it is starting to test Facebook News, which is described as “a dedicated place for news on Facebook”, to a subset of people in the United States. The initial test showcases local original reporting from the largest major metro areas of the country, beginning with New York, Los Angeles, Dallas-Fort Worth, Philadelphia, Houston, Washington D.C., Miami, Atlanta, and Boston.

How does Facebook decide which publishers to include? The announcement provides an explanation.

They need to be in our News Page Index, which we developed in collaboration with the industry to identify news content. They also need to abide by Facebook’s Publisher Guidelines, these include a range of integrity signals in determining product eligibility, including misinformation – as identified based on third-party fact checkers – community standards violations (e.g. hate speech), clickbait, engagement bait, and others…. Lastly, they must serve a sufficiently large audience, with different thresholds for the four categories of publishers.
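
To make the quoted criteria a bit more concrete, here is a minimal sketch of what that kind of eligibility check could look like. It is purely illustrative: the category names, field names, and audience thresholds below are placeholders of my own, since Facebook has not published the actual values or its data model.

```python
# Hypothetical eligibility check modeled on the criteria quoted above.
# Every name and number here is an illustrative placeholder, not Facebook's
# real categories, thresholds, or internal data model.

AUDIENCE_THRESHOLDS = {
    "general_news": 100_000,  # placeholder values; real thresholds are not public
    "topical_news": 50_000,
    "diverse_news": 50_000,
    "local_news": 10_000,
}

def is_eligible(publisher: dict) -> bool:
    """Apply the three criteria named in the announcement to one publisher."""
    # 1. Must be registered in the News Page Index.
    if not publisher.get("in_news_page_index", False):
        return False
    # 2. Must be free of integrity violations: misinformation flagged by
    #    third-party fact-checkers, community standards violations such as
    #    hate speech, clickbait, engagement bait, and so on.
    if publisher.get("integrity_violations"):
        return False
    # 3. Must serve a sufficiently large audience for its publisher category.
    threshold = AUDIENCE_THRESHOLDS.get(publisher.get("category"), float("inf"))
    return publisher.get("audience_size", 0) >= threshold

# Example: a local publisher with a clean record and 25,000 readers.
print(is_eligible({
    "in_news_page_index": True,
    "integrity_violations": [],
    "category": "local_news",
    "audience_size": 25_000,
}))  # prints True under these placeholder thresholds
```

The point is simply that the quoted policy boils down to a multi-criteria filter: index membership, a clean set of integrity signals, and an audience threshold that varies by publisher category.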

Facebook says it talked to news organizations about what they’d like to see included in a news tab, how their stories should be presented, and what analytics to provide. Facebook also talked to people and publishers, and identified key features to make Facebook News valuable.

Those key features are:

Today’s Stories – chosen by a team of journalists to catch you up on the news throughout the day.

Personalization – based on the news you read, share and follow, so you can find new interests and topics and Facebook News is fresh and interesting every time you open it.

Topic sections – to dive deeper into business, entertainment, health, science & tech, and sports.

Your Subscriptions – a section for people who have linked their paid news subscriptions to their Facebook account.

Controls – to hide articles, topics, and publishers you don’t want to see.

I find the part called “Your Subscriptions” interesting. I don’t use Facebook, so it never occurred to me that some people buy subscriptions to their favorite news sites and connect them to their Facebook accounts. I cannot help but wonder how that affects a news site if people choose to start accessing it only through Facebook.


Facebook Removed Coordinated Inauthentic Behavior from Iran and Russia



Facebook announced that it removed four separate networks of accounts, Pages and Groups for engaging in inauthentic behavior on Facebook and Instagram. According to Facebook, three of those networks originated in Iran, and one originated in Russia.

Facebook stated that all of these operations created networks of accounts to mislead others about who they were and what they were doing. Facebook has shared its findings with law enforcement, policymakers, and industry partners. In addition, Facebook shared some samples of content that had been posted by some of those Pages. Those who are interested can view it on the Facebook Newsroom announcement.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups, and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.

Here are some details about what Facebook found so far:

  • They removed 93 Facebook accounts, 17 Pages, and four Instagram accounts for violating Facebook’s policy against coordinated inauthentic behavior. Facebook states that this activity originated in Iran and focused primarily on the United States, and some on French-speaking audiences in North Africa.
  • The individuals behind this activity used compromised and fake accounts – some of which had already been disabled by Facebook’s automated systems. Those accounts were used to masquerade as locals, manage Pages, join Groups, and drive people to off-platform domains connected to Facebook’s previous investigation into the Iran-linked “Liberty Front Press”.
  • The Page admins and account owners typically posted about local political news and geopolitics, including topics like public figures in the US, politics in the US and Israel, support of Palestine, and the conflict in Yemen.
  • About 7,700 accounts followed one or more of these Pages, and around 145 people followed one or more of these Instagram accounts.

While it is good that Facebook is making an effort to remove fake accounts and inauthentic behavior, it isn’t enough. This keeps happening. Those who use Facebook or Instagram need to be smarter about which accounts they follow, which Groups they join, and which Pages they interact with. Never follow an off-site link to a website that you haven’t heard of. It could be leading you to “fake news” designed to provoke outrage.