Category Archives: Facebook

Facebook Settles Claims Over Discrimination Against U.S. Workers

Facebook settled claims that it refused to recruit or hire U.S. workers for positions it set aside for temporary visa holders, CNBC reported. According to CNBC, Facebook settled with not only the Department of Labor, but also the Department of Justice. These were two separate lawsuits. The Department of Justice (DOJ) posted a release on its website with information about both lawsuits.

The Justice Department’s settlement resolves its claims that Facebook routinely refused to recruit, consider or hire U.S. workers, a group that includes U.S. citizens, U.S. nationals, asylees, refugees and lawful permanent residents, for positions it has reserved for temporary visa holders in connection with the PERM process. Additionally, the Labor Department’s settlement resolves issues it separately identified through audit examinations of Facebook’s recruitment activities related to its PERM applications filed with the Employment and Training Administration’s Office of Foreign Labor Certification (OFLC).

PERM stands for “permanent labor certification program.”

Specifically, the lawsuit alleged that, in contrast to its standard recruitment practices, Facebook used recruiting methods designed to deter U.S. workers who applied to the positions, such as requiring applications to be submitted by mail only; refused to consider U.S. workers who applied to the positions; and hired only temporary visa holders.

According to the Justice Department’s lawsuit, Facebook’s hiring for these positions intentionally discriminated against U.S. workers because of their citizenship or immigration status, in violation of the anti-discrimination provision of the Immigration and Nationality Act (INA).

Under the DOJ’s settlement, Facebook will pay a civil penalty of $4.75 million to the United States, will pay up to $9.5 million to eligible victims of Facebook’s alleged discrimination, and train its employees on the anti-discrimination requirements of the INA. It also must accept electronic resumes or applications from all U.S. workers who apply.

The DOJ says that this civil penalty and backpay fund represent the largest fine and monetary award that the Department of Justice has ever recovered in the 35-year history of the INA’s anti-discrimination provision.

Facebook has been in a bit of trouble lately. The Wall Street Journal reported on Facebook documents that had been leaked to the newspaper. A whistleblower shared what she knew about Facebook’s behind-the-scenes operations on “60 Minutes”.

Today, Facebook was fined by the Department of Justice for discriminating against U.S. workers in its hiring. It is a small fine compared to Facebook’s vast wealth – but it still sends a message to Facebook to stop being awful.

Facebook Still Has A Problem With Hate Speech

Facebook’s AI can’t consistently identify first-person shooting videos, racist rants, and the difference between cockfighting and car crashes. This comes from internal Facebook documents that were reviewed by The Wall Street Journal.

On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules – a low-single-digit percent, they say. When Facebook’s algorithms aren’t certain enough that the content violates the rules to delete it, the platform shows that material to users less often – but the accounts that posted the material go unpunished.

According to The Wall Street Journal, a team of Facebook employees concluded that the AI systems were removing posts that generated only 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook’s policies against violence and incitement.

Engadget reported that “there’s little doubt that Facebook is engaged in some spin”. In testimony, whistleblower Frances Haugen asserted that Facebook can only catch a “very tiny minority” of offending material. According to Engadget, Haugen also alleged that Facebook resisted implementing safer algorithms and other efforts to minimize hateful and divisive distractions.

Facebook’s VP of Integrity, Guy Rosen, posted “Hate Speech Prevalence Has Dropped by Almost 50% on Facebook”. This was published the same day as The Wall Street Journal article. Here is a small portion of that post:

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true. We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to build our systems, identify issues and build solutions.”

According to Facebook, the documents that were sent to The Wall Street Journal “misrepresent” what Facebook is doing. Personally, I find it hard to believe that Facebook is telling the truth.

Facebook Provided Details About the Outage

On October 4, 2021, Facebook suddenly went down. The problem extended to Instagram and WhatsApp. In my opinion, this situation might be a good example of why allowing one giant company to continually purchase its competitors is a bad idea. If those services were independent of each other, the problem that made Facebook inaccessible would not have extended to Instagram and WhatsApp.

On the day of the outage, Facebook tweeted: “We’re aware that some people are having trouble accessing our apps and products. We’re working to get things back to normal as quickly as possible, and we apologize for any inconvenience.”

There is something amusing about Facebook having to resort to Twitter in order to connect to people who could no longer access Facebook’s products.

Yesterday, the Facebook Engineering blog posted an article titled: “More details about the October 4 outage”. It was written by Santosh Janardhan. Here are a few key paragraphs from the blog post:

“…This outage was triggered by the system that manages our global backbone network capacity. The backbone is the network Facebook has built to connect all our computing facilities together, which consists of tens of thousands of miles of fiber-optic cables crossing the globe and linking all our data centers.”

“… The data traffic between all these computing facilities is managed by routers, which figure out where to send all the incoming and outgoing data. And in the extensive day-to-day work of maintaining this infrastructure, our engineers often need to take part of the backbone offline for maintenance – perhaps repairing a fiber line, adding more capacity or updating the router itself.

“This was the source of yesterday’s outage. During one of these routine maintenance jobs, a command was issued with the intention to assess the availability of global backbone capacity, which unintentionally took down all the connections in our backbone network, effectively disconnecting Facebook data centers globally. Our systems are designed to audit commands like these to prevent mistakes like this, but a bug in that audit tool prevented it from properly stopping the command…”

A “bug” in Facebook’s own audit tools crashed Facebook. This situation makes me think of the horror movies where someone is absolutely terrified and calls 911, only to learn the scary calls they had been receiving came from inside the house.
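The failure mode the blog post describes – an audit check that is supposed to block dangerous commands, but lets one through because of a bug – can be sketched in a few lines. This is a hypothetical illustration only; the command names and the buggy exact-match check are invented for the example, and Facebook’s actual tooling is not public.

```python
# Hypothetical sketch of a command audit gate like the one described above.
# A maintenance command must pass an audit check before it runs; a bug in
# the check (here, an exact-string blocklist) lets a dangerous variant
# slip through. All names and logic are invented for illustration.

DANGEROUS = {"disable-backbone --all"}  # commands the audit should block

def buggy_audit(command: str) -> bool:
    """Return True if the command is allowed to run.
    Bug: exact-match comparison misses equivalent dangerous forms."""
    return command not in DANGEROUS

def fixed_audit(command: str) -> bool:
    """A stricter check that inspects the command's tokens, not its exact text."""
    return "--all" not in command.split()

# The intended capacity-assessment command passes either check:
assert buggy_audit("assess-backbone-capacity") is True
assert fixed_audit("assess-backbone-capacity") is True

# A command that takes down every backbone link slips past the buggy
# audit because its text (extra space) doesn't exactly match the blocklist:
assert buggy_audit("disable-backbone  --all") is True   # should be False!
assert fixed_audit("disable-backbone  --all") is False  # stricter check blocks it
```

The point of the sketch is that an audit tool is just more code, with its own bugs – which is why the outage post’s admission is less reassuring than it sounds.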

Facebook Denies Instagram is “Toxic for Teens”

Facebook denies claims made by The Wall Street Journal about Instagram being “toxic for teen girls”. In its Newsroom, Facebook posted the following claims:

  • Contrary to the Wall Street Journal’s characterization, Instagram’s research shows that on 11 of 12 well-being issues, teenage girls who said they struggled with those difficult issues also said that Instagram made those issues better rather than worse.
  • This research, like external research on these issues, found teens report having both positive and negative experiences with social media.
  • We do internal research to find out how we can best improve the experience for our teens, and our research has informed product changes as well as new resources.

CNBC reported that Facebook executive Antigone Davis, global head of safety, will testify before the Senate Commerce subcommittee on consumer protection on September 30, 2021. The hearing focuses on The Wall Street Journal’s article that shows Instagram had a negative effect on many teen girls’ mental health.

Personally, it sounds to me like Facebook got caught, and is trying to salvage its reputation before the Senate subcommittee hearing begins.

The Wall Street Journal recently published an article titled: “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”. According to The Verge, that information came from documents leaked to The Wall Street Journal.

The Verge pointed out some of The Wall Street Journal’s findings:

  • A study by Facebook of teen Instagram users in the US and UK found that more than 40% of those who reported feeling “unattractive” said the feelings started when using Instagram.
  • Research reviewed by Facebook’s top executives concluded that Instagram was engineered towards greater “social comparison” than rival apps like TikTok and Snapchat. TikTok is focused on performance and Snapchat uses jokey filters that focus on the face. Instagram spotlights users’ bodies and lifestyles.
  • “Teens blame Instagram for increases in the rate of anxiety and depression,” said internal research by Facebook presented in 2019, and that “This reaction was unprompted and consistent across all groups”.
  • Facebook found that among the teens who said they had suicidal thoughts, 13 percent of UK users and 6 percent of US users said these impulses could be traced back to the app.

Facebook Messenger Updated End-to-End Encrypted Chats

Facebook Messenger announced that they are rolling out the option to make voice and video calls end-to-end encrypted on Messenger, along with updated controls for disappearing messages.

According to the announcement: “People expect their messaging apps to be secure and private, and with these new features, we’re giving them more control over how private they want their calls and chats to be.”

Here is a quick look at what’s new:

Option for end-to-end encrypted voice and video calls: Messenger says it has offered the option to secure your one-on-one text chats with end-to-end encryption since 2016. Now, they are introducing calling to this chat mode so you can secure your audio and video calls with this same technology, if you choose.

Updated controls over Disappearing Messages: Messenger also updated the expiring message feature within their end-to-end encrypted chats. They updated this mode to provide more options for people in the chat to choose the amount of time before all new messages disappear, from as few as 5 seconds to as long as 24 hours.
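Under the hood, a disappearing-message control like this is essentially a per-chat timer. As a rough sketch (not Messenger’s actual implementation, which is not public), a client could decide whether a message is still visible like this, using only the 5-second-to-24-hour range stated in the announcement:

```python
import time

# Hypothetical sketch of a disappearing-message timer, based only on the
# range described in the announcement (5 seconds to 24 hours). Function
# and parameter names are invented for illustration.

MIN_TTL = 5              # seconds
MAX_TTL = 24 * 60 * 60   # 24 hours, in seconds

def is_visible(sent_at, ttl_seconds, now=None):
    """Return True while the message is still within its chosen lifetime."""
    if not MIN_TTL <= ttl_seconds <= MAX_TTL:
        raise ValueError("TTL must be between 5 seconds and 24 hours")
    if now is None:
        now = time.time()
    return (now - sent_at) < ttl_seconds

# A message sent 10 seconds ago with a 5-second timer has expired:
assert is_visible(sent_at=100.0, ttl_seconds=5, now=110.0) is False
# The same message with a 60-second timer is still visible:
assert is_visible(sent_at=100.0, ttl_seconds=60, now=110.0) is True
```

The real feature also has to delete the message on every device in the chat, which is the harder part – the sketch only covers the visibility decision.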

Here are some things Messenger says are coming soon:

End-to-end encrypted group chats and calls in Messenger: They will begin testing end-to-end encryption for group chats, including voice and video calls, for friends and family that already have an existing chat thread or are already connected. They will also begin a test for your delivery controls to work with your end-to-end encrypted chats. That way, you can prevent unwanted interactions by deciding who can reach your chat lists, who goes to your requests folder, and who can’t message you at all.

Opt-in end-to-end encryption for Instagram DMs: Messenger will also do a limited test with adults in certain countries that lets them opt-in to end-to-end encrypted messages and calls for one-on-one conversations on Instagram. Similar to how Messenger works today, you need to have an existing chat or be following each other to start an end-to-end encrypted DM. As always, you can block someone you don’t want to talk to or report something to Messenger if it doesn’t seem right.

I find it interesting that this Messenger post appeared shortly after Apple’s controversial decision to scan the iCloud photos of some users hit the news. Messenger, which is part of Facebook, appears to be trying to look like the “good guys” in this situation.

It is important to keep in mind that Facebook (and its extensions) will continue to gather your data and track you. They are giving you the option to use end-to-end encryption on calls and videos, but probably hope you won’t actually opt-in.

UK CMA Raises Concerns Over Facebook’s Takeover of Giphy

The Competition and Markets Authority has provisionally found Facebook’s merger with Giphy will harm competition between social media platforms and remove a potential challenger in the display advertising market.

CMA stated: The merger brings together Facebook, the largest provider of social media sites and display advertising in the UK, with Giphy, the largest provider of GIFs. If the Competition and Markets Authority’s competition concerns are ultimately confirmed, it could require Facebook to unwind the deal and sell off Giphy in its entirety.

The CMA also stated: This is particularly concerning given Facebook’s existing market power in display advertising – as part of its assessment, the CMA found that Facebook had a share of around 50% of the £5.5 billion display advertising market in the UK.

Stuart McIntosh, chair of the independent inquiry group carrying out the phase 2 investigation, said:

“Millions of people share GIFs every day with friends, family and colleagues, and this number continues to grow. Giphy’s takeover could see Facebook withdrawing GIFs from competing platforms or requiring more user data in order to access them. It also removes a potential challenger to Facebook in the £5.5 billion display advertising market. None of this would be good news for customers.

“While our investigation has shown serious competition concerns, these are provisional. We will now consult on our findings before completing our review. Should we conclude that the merger is detrimental to the market and social media users, we will take the necessary actions to make sure people are protected.”

Variety reported that the deal between Facebook and Giphy was announced in May of 2020, and is valued at $400 million.

A Facebook spokesperson told Variety “We disagree with the CMA’s preliminary findings, which we do not believe to be supported by the evidence. As we have demonstrated, this merger is in the best interests of people and businesses in the U.K. – and around the world – who use Giphy and our services. We will continue to work with the CMA to address the misconception that the deal harms competition.”

That’s a typical response from Facebook every time it is called out on its terrible actions.

Variety reported that Giphy currently has no employees, revenue, or assets in the UK – which Facebook argues means the CMA has no jurisdiction over the deal. The merger is also being assessed by competition authorities in other jurisdictions.

Facebook Removed Some False Information About COVID-19

Facebook said it has removed a network of accounts from Russia that the company linked to a marketing firm which aimed to enlist influencers to push anti-vaccine content about the COVID-19 vaccines, Reuters reported.

According to Reuters, Facebook said it has banned accounts connected to Fazze, a subsidiary of UK-registered marketing firm AdNow, which primarily conducted its operations from Russia, for violating its policy against foreign interference.

Facebook posted information on its Newsroom that included a Summary of July 2021 Findings.

…In July, we removed two networks from Russia and Myanmar. In this report, we’re also sharing an in-depth analysis by our threat intelligence team into one of the operations – a network from Russia linked to Fazze, a marketing firm registered in the UK – to add to the public reporting on this network’s activity across a dozen different platforms…

Facebook removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military.

Facebook also removed 65 Facebook accounts and 243 Instagram accounts from Russia that Facebook linked to Fazze, whose operations were primarily conducted from Russia. Fazze is now banned from Facebook’s platform.

The BBC reported that the accounts in the network spread memes that used images from the Planet of the Apes films to give the impression that the vaccine would turn people into monkeys.

Reuters pointed out that false claims and conspiracy theories about COVID-19 and its vaccines have proliferated on social media in recent months. Major tech firms like Facebook have been criticized by U.S. lawmakers and President Joe Biden’s administration, who say the spread of online lies about vaccines is making it harder to fight the pandemic.

Personally, I think it is good that Facebook finally got around to removing (some) misinformation about COVID-19 and vaccines. Doing so could encourage people who are vaccine-hesitant to consider protecting themselves and their loved ones by getting the vaccine. That won’t happen if all they see on Facebook is misinformation.