Yesterday, The New York Times reported that Mike Bloomberg is working with Meme 2020 to have the company create memes supporting Bloomberg’s presidential campaign. Today, The Verge reported that Facebook will allow such memes for political campaigns, so long as the posts are clearly identified as ads. The memes will not be placed in Facebook’s political Ad Library.
“Branded content is different from advertising, but in either case we believe it’s important people know when they’re seeing paid content on our platforms,” a Facebook spokesperson told The Verge. “We’re allowing US-based political candidates to work with creators to run this content, provided the political candidates are authorized and the creators disclose any paid partnerships through our branded content tools.”
Personally, I wouldn’t have guessed that out of all the people who are running for president it would be Mike Bloomberg who decided to pay influencers to make memes about him. I don’t think there are too many 77-year-olds who understand what memes are, how fast they spread, or what they mean. I’d like to hear the story of how Bloomberg came to the conclusion that memes were exactly what his campaign needed.
But that’s not the weirdest thing about this situation. According to The Verge, the Meme 2020 project is part of Jerry Media, the promoter behind the infamous Fyre Festival. Meme 2020 is led by Mick Purzycki, the executive director of Jerry Media. What could possibly go wrong?
TechCrunch reported that Facebook has acquired Scape Technologies. The company was founded in 2016 and is located in Shoreditch, London. Scape Technologies is building a cloud-based Vision Engine that uses computer vision to allow camera devices to understand their environment.
Rather than relying on 3D maps built and stored locally, Scape’s Vision Engine builds and references 3D maps in the cloud, allowing devices to tap into a ‘shared understanding’ of an environment.
Initially focused on augmented reality, Scape’s first product is an SDK that allows AR content to be anchored to specific locations, outside and at an unprecedented scale.
A Facebook spokesperson confirmed the acquisition to both TechCrunch and Engadget: “We acquire smaller tech companies from time to time. We don’t always discuss our plans.” As such, we are left to speculate about why Facebook wanted to acquire Scape Technologies, and what the companies will create together.
Scape Technologies describes its Vision Engine as built from scratch to process imagery from any source into a 3D representation of the environment. After the Vision Engine has created a 3D map, the company’s ‘Visual Positioning Service’ determines the precise position of camera devices in the cloud. Scape claims this approach achieves higher scalability and performance than any other.
To me, it sounds like the Vision Engine could be used to make a virtual reality game of some kind. In 2014, Facebook acquired Oculus VR for $2 billion.
Or maybe Vision Engine could be used for other purposes. Scape Technologies says that its visual positioning service is currently available within London, with more cities to be announced shortly. The company is also working on a public API that will allow any device to determine its location given an image as input, regardless of the use case or platform being targeted.
To me, this sounds like the Vision Engine could potentially be used for surveillance or law enforcement. That makes me very uncomfortable. Again, this is all speculation, and we will have to wait and see what happens.
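To make Scape’s “image in, location out” description concrete, here is a minimal sketch of what a client for such a visual positioning API might look like. Scape has not published its API, so every endpoint, field name, and function here is hypothetical; this only illustrates the general pattern of sending a camera frame and parsing back a pose.

```python
import base64
import json

# Hypothetical sketch of a visual-positioning request/response cycle.
# None of these field names come from Scape; they only illustrate the
# general "camera image in, device location out" pattern.

def build_request(image_bytes: bytes) -> dict:
    """Package a camera frame as a JSON-friendly request body."""
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "format": "jpeg",
    }

def parse_response(body: str) -> tuple:
    """Extract a (latitude, longitude, heading) triple from a
    hypothetical service response."""
    data = json.loads(body)
    pose = data["pose"]
    return (pose["lat"], pose["lon"], pose["heading_deg"])

# Example with a fabricated reply, since the real service is not public:
fake_reply = json.dumps(
    {"pose": {"lat": 51.5265, "lon": -0.0825, "heading_deg": 142.0}}
)
print(parse_response(fake_reply))  # (51.5265, -0.0825, 142.0)
```

The interesting part is entirely server-side, of course: matching a single image against a city-scale 3D map is what Scape describes as its core technology, and nothing about that is visible in a client sketch like this.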
Facebook has a history of not being very good at protecting users’ data. This time, things are a bit different. Bloomberg reported that a thief stole payroll data for thousands of Facebook employees. The unexpected thing is that the thief did not obtain it by hacking.
According to Bloomberg, the personal information of tens of thousands of Facebook workers was compromised last month when a thief stole several corporate hard drives from an employee’s car. Why did the employee have hard drives in their car? That question has not been answered yet.
What is known is that the employee was a member of Facebook’s payroll department. The employee was not supposed to take the hard drives outside the office. Apparently, Facebook has taken some disciplinary action against the employee.
A Facebook spokeswoman told Bloomberg: “We worked with law enforcement as they investigated a recent car break-in and theft of an employee’s bag containing company equipment with employee payroll information stored on it.” The Facebook spokeswoman described the situation as “a smash and grab crime”.
It is a really strange situation. The car break-in happened on November 17, 2019. That puts it a little bit before the Thanksgiving holiday when many people shop for holiday presents (and leave them in their cars). I suspect that the thief had no idea what was in the bag when they stole it.
Bloomberg reported that Facebook started alerting employees affected by the situation on December 13, 2019. The employees were encouraged to notify their banks, and were offered a subscription to an identity theft monitoring service.
Facebook issued a “fake news” notice on a post made by The States Times Review at the request of the Singapore government. According to the BBC, Facebook said it “is legally required to tell you that the Singapore government says this post has false information.” Singapore claimed the post contained “scurrilous accusations”.
The BBC reported that the “fake news” label was added to the bottom of the original post. The post itself was not altered. The correction label is visible only to Facebook’s Singapore users.
The Singapore law is called the Protection from Online Falsehoods and Manipulation Act. It went into effect in October of this year. It allows the Singapore government to order online platforms to remove and correct whatever it considers to be false statements that are “against the public interest”. A person guilty of breaking this law could be heavily fined and face a prison sentence of up to five years.
The same law also bans the use of fake accounts or bots to spread “fake news”. The penalty for this is up to S$1m ($733,700) and a jail term of up to 10 years.
Reuters reported that the Singapore government initially ordered the Facebook user who runs the States Times Review blog, Alex Tan, to issue a correction on the post. The article reportedly contains accusations about the arrest of a whistleblower and election rigging.
Alex Tan, who does not live in Singapore, and says he is an Australian citizen, refused to post the requested correction notice. So, the Singapore government required Facebook to do it. According to Reuters, authorities said that Alex Tan is now under investigation.
Personally, I find this terrifying. In this situation, the government of Singapore used one of its laws against a person who not only does not live in Singapore, but is also a citizen of Australia.
I do not understand why Facebook was so quick to do what the Singapore government wanted, especially considering that Facebook refuses to fact-check (or apply a “fake news” warning to) political advertisements in the United States. Facebook clearly does not mind leaving “fake news” unlabeled elsewhere – so why does Singapore have so much power over what Facebook puts that label on?
Facebook is introducing Facebook Pay. The company describes it as: “a convenient, secure and consistent payment experience across Facebook, Messenger, Instagram, and WhatsApp.”
Facebook’s announcement explains: “Facebook Pay will begin rolling out on Facebook and Messenger this week in the US for fundraisers, in-game purchases, event tickets, person-to-person payments on Messenger and purchases from select Pages and businesses on Facebook Marketplace. And, over time, we plan to bring Facebook Pay to more people and places, including for use across Instagram and WhatsApp.”
Facebook points out that Facebook Pay is built on existing financial infrastructure and partnerships, and is separate from the Calibra wallet which will run on the Libra network.
That is probably a good decision on Facebook’s part, because Libra has had several companies that were founding members drop out. I don’t think anyone should trust that Libra will be stable until or unless it gets additional companies to sponsor it.
But, this doesn’t necessarily mean that Facebook Pay is a good idea. The Verge reported in October that PayPal, Visa, Mastercard, Stripe, Mercado Pago, and Ebay all dropped out of the Libra project. To me, it seems like a long-shot that the companies who pulled out of Libra would turn around and attach themselves to Facebook Pay.
But even if they did, and other credit card companies also decided to get on board with Facebook Pay, that brings up another problem: how much do you trust Facebook with your credit card number? Earlier this year, the FTC imposed a $5 billion penalty on Facebook over the Cambridge Analytica scandal and required the company to boost its accountability and transparency.
It is stunning how much damage people can do by posting the (potential) name of a whistleblower on social media, and having that name be passed around. This poses a dilemma for social media platforms. Both Facebook and YouTube are deleting content that includes the alleged name of the whistleblower that sparked a presidential impeachment inquiry. Twitter is not.
The New York Times reported a statement they received in an email from a Facebook spokeswoman:
“Any mention of the potential whistleblower’s name violates our coordinating harm policy, which prohibits content ‘outing of witness, informant or activist’,” a Facebook spokeswoman said in an emailed statement. “We are removing any and all mentions of the potential whistleblower’s name and will revisit this decision should their name be widely published in the media or used by public figures in debate.”
The New York Times reported that the article that included the alleged name of the whistleblower came from Breitbart. This is interesting, because Breitbart is among the participating publications that Facebook included in its “high quality” news tab. (Other publications include The New York Times, the Washington Post, Wall Street Journal, BuzzFeed, Bloomberg, ABC News, Chicago Tribune and Dallas Morning News.) Facebook has been removing that article, which suggests the company does not consider it “high quality”.
CNN reported that a YouTube spokesperson said videos mentioning the potential whistleblower’s name would be removed. The spokesperson said YouTube would use a combination of machine learning and human review to scrub the content. The removals, the spokesperson said, would affect the titles and descriptions of videos as well as the video’s actual content.
The Hill reported that Twitter said in a statement that it will remove posts that include “personally identifiable information” on the alleged whistleblower, such as his or her cell phone number or address, but will keep up tweets that mention the name.
Those who use Facebook should view the political ads they see on the social media platform with a healthy dose of skepticism. CNN reported that Facebook will not fact-check political ads.
That means individual people will need to do their own research on whatever content those types of ads contain. Sadly, I don’t think most people will bother to do their own fact-checking, especially for political ads that spread misinformation matching the person’s political leanings.
Facebook announced its decision not to fact-check political speech in September of 2019. Facebook stated that it does not believe it is appropriate “to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public scrutiny.”
Facebook’s statement continues: “That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under eligibility guidelines. This means we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.”
That said, Facebook doesn’t appear to be adhering to the part about demoting ads that contain previously debunked content. The New York Times reported in October 2019 that a 30-second ad released by the Trump campaign provided misinformation about Joe Biden, and the impeachment inquiry into President Trump.
According to The New York Times, the Biden campaign asked Facebook to take down that ad. Facebook responded to the Biden campaign by saying the ad had been viewed five million times on the site, and declaring that the ad did not violate company policies.
Facebook’s decision to opt out of fact-checking political ads extends to the UK. According to CNN, Facebook will not fact-check ads run by British political parties or the thousands of candidates running for election to the House of Commons. This comes as the UK prepares for a historic December election centered on Brexit.