Tag Archives: Meta

Meta Releases Community Standards Enforcement Report



Earlier this year, Meta Platforms quietly convened a war room of staffers to address a critical problem: virtually all of Facebook’s top-ranked content was spammy, oversexualized, or generally what the company classified as regrettable, The Wall Street Journal reported.

Meta’s executives and researchers were growing embarrassed that its widely viewed content report, a quarterly survey of the posts with the broadest reach, was consistently dominated by stolen memes, engagement bait and link spam for sketchy online shops, according to documents viewed by The Wall Street Journal and people familiar with the issue.

Meta posted its “Integrity and Transparency Reports, Third Quarter 2022,” written by Guy Rosen, VP of Integrity. Part of the report included Community Standards Enforcement Report highlights.

It includes the following:

“Our actions against hate speech-related content decreased from 13.5 million to 10.6 million in Q3 2022 on Facebook because we improved the accuracy of our AI technology. We’ve done this by leveraging data from past user appeals to identify posts that could have been removed by mistake without appropriate cultural context.

“For example, now we can better recognize humorous terms of endearment used between friends, or better detect words that may be considered offensive or inappropriate in one context but not another. As we improve this accuracy, our proactive detection rate for hate speech also decreased from 95.6% to 90.2% in Q3 2022.”

Part of the report states that Meta’s actions against content that incites violence decreased from 19.3 million to 14.4 million in Q3 2022 after their improved AI technology was “better able to recognize language and emojis used in jest between friends.”

For bullying and harassment-related content, Meta’s proactive rate decreased in Q3 2022 from 76.7% to 67.8% on Facebook, and from 87.4% to 87.3% on Instagram. Meta stated that this decrease was due to improved accuracy in its technologies (and a bug in its system that has since been resolved).
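Meta leans heavily on this “proactive rate” figure without defining it in the passages quoted here. As a rough sketch based on the methodology Meta publishes alongside its Community Standards Enforcement Reports (my paraphrase, not a quotation from the Q3 2022 report), the metric compares what Meta’s systems flag on their own with everything the company ultimately acts on:

\[
\text{proactive rate} \;=\; \frac{\text{actioned content Meta detected before any user reported it}}{\text{all actioned content}} \times 100\%
\]

Read that way, the drop from 95.6% to 90.2% for hate speech means a somewhat larger share of the content Meta acted on in Q3 was surfaced by user reports rather than by its automated detection.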

On Facebook, Meta Took Action On:

16.7 million pieces of content related to terrorism, an increase from 13.5 million in Q2. This increase occurred because non-violating videos were incorrectly added to Meta’s media-matching technology banks and removed (though they were eventually restored).

4.1 million pieces of drug content, an increase from 3.9 million in Q2 2022, due to improvements made to Meta’s proactive detection technology.

1.4 billion pieces of spam content, an increase from 734 million in Q2, due to an increased number of adversarial spam incidents in August.

On Instagram, Meta Took Action On:

2.2 million pieces of content related to terrorism, an increase from 1.9 million in Q2, because non-violating videos were incorrectly added to Meta’s media-matching technology banks and removed (though they were eventually restored).

2.5 million pieces of drug content, an increase from 1.9 million, due to improvements in Meta’s proactive detection technology.

AdWeek reported that Meta removed three networks during the third quarter of this year for violations of Meta’s policies against inauthentic behavior.

According to AdWeek, the first originated in the U.S. and was linked to individuals associated with the U.S. military, and it operated across many internet services and focused on Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Syria, Tajikistan, Uzbekistan and Yemen.

The second one originated in China and targeted the Czech Republic, the U.S., and, to a lesser extent, Chinese- and French-speaking audiences around the world. The third originated in Russia, primarily targeting Germany but also France, Italy, Ukraine and the U.K.

It appears that Meta has put some effort into cleaning up its platforms. I suppose that’s what happens when Meta’s own researchers were “embarrassed” by what was appearing on Facebook and Instagram!


Facebook To Remove Topics From Users’ Profiles



Facebook quietly announced it will remove several categories of information from user profiles, including religious views, political views, addresses and the “Interested in” field, which indicates sexual preference. The change goes into effect on December 1, Gizmodo reported.

“As part of our efforts to make Facebook easier to navigate and use, we’re removing a handful of profile fields: Interested In, Religious Views, Political Views, and Address,” said Emil Vazquez, a Meta spokesperson. “We’re sending notifications to people who have these fields filled out, letting them know these fields will be removed. This change doesn’t affect anyone’s ability to share this information about themselves elsewhere on Facebook.”

According to Gizmodo, the shift reflects Meta’s broader public relations efforts. As a whole, the tech industry wants the public to differentiate between “sensitive” data and what you might call “regular” data. Meta will tell you that Instagram and Facebook don’t use sensitive data for advertising, for example, though that change only came after researchers uncovered serious problems.

Gizmodo also reported that Facebook has earned a poor reputation, not just for causing societal problems but because it’s just not cool anymore. Users have been leaving the platform in droves, and even Instagram, Facebook’s younger and slightly hipper sibling, has seen its cachet decline.

The company is in dire financial straits as a result, Gizmodo reported. It laid off 11,000 employees just last week. CEO Mark Zuckerberg shifted the entire future of the company, moving away from social media and towards a moonshot goal of building a mixture of virtual and augmented reality he calls “the metaverse”. But in the meantime, Facebook and Instagram are still Meta’s only source of income.

TechCrunch reported that Facebook’s change was first spotted by social media consultant Matt Navarra, who tweeted a screenshot of the notice being sent to users who have these fields filled out. The notice indicates that users’ other information will remain on their profiles along with the rest of their contact and basic information.

According to TechCrunch, Facebook’s decision to get rid of these specific profile fields is part of its efforts to streamline its platform, which currently consists of several features that are somewhat outdated.

It’s worth noting that the information fields that Facebook is choosing to remove are ones that other major social networks don’t offer. Platforms like Instagram and TikTok have simple bios that let users share a little bit about themselves without going into specific details, such as political or religious views.

Engadget reported that other details that you provide Facebook, such as your contact information and relationship status, will persist. You can download a copy of your Facebook data before December 1st if you’re determined to preserve it, and you still have control over who can see the remaining profile content.

I’m seeing what might be a pattern. Facebook is removing information from the profiles of its users, making it harder for them to self-identify easily. Twitter is losing employees by the hundreds, which I assume would make it harder for the company to implement new features or enforce its terms of service. Could this be the end of social media as we know it?


Meta Is Preparing To Notify Employees of Large-Scale Layoffs



Meta Platforms Inc. (parent company of Facebook) is planning to begin large-scale layoffs this week, according to people familiar with the matter, in what could be the largest round in a recent spate of tech job cuts after the industry’s rapid growth during the pandemic, The Wall Street Journal reported.

According to The Wall Street Journal, the layoffs are expected to affect many thousands of employees, and an announcement is planned to come as soon as Wednesday. Meta reported more than 87,000 employees at the end of September. Company officials have already told employees to cancel nonessential travel beginning this week, the people said.

The Wall Street Journal also reported that the planned layoffs would be the first broad head-count reductions to occur in the company’s 18-year history. While smaller on a percentage basis than the cuts at Twitter Inc. this past week, which hit about half of that company’s staff, the number of Meta employees expected to lose their jobs could be the largest to date at a major technology corporation in a year that has seen a tech-industry retrenchment.

The New York Times reported that Meta plans to lay off employees this week, three people with knowledge of the situation said, adding that the job cuts were set to be the most significant at the company since it was founded in 2004.

According to The New York Times, it was unclear how many people would be cut and in which departments, said the people, who declined to be identified because they were not authorized to speak publicly. The layoffs were expected by the end of the week. Meta had 87,314 employees at the end of September, up 28 percent from a year ago.

Why the job cuts? The New York Times explained that Meta has been struggling financially for months and has been increasingly clamping down on costs. The Silicon Valley company, which owns Facebook, Instagram, WhatsApp and Messenger, has spent billions of dollars on the emerging technology of the metaverse, an immersive online world, just as the global economy has slowed and inflation has soared.

In addition, digital advertising – which forms the bulk of Meta’s revenue – has weakened as advertisers have pulled back, affecting many social media companies. Meta’s business has also been hurt by privacy changes that Apple enacted, which have hampered the ability of many apps to target mobile ads to users.

Are we looking at the end of the biggest social media giants? Massive layoffs are never a good sign for any company; they indicate that the company is losing money so quickly that it feels the need to cut a large portion of its workforce. Meta largely did this to itself by basing the majority of its income on advertising, which has become less lucrative since Apple’s privacy changes. Twitter, on the other hand, is dealing with the chaos of Elon Musk’s choices.


The Wire Retracted Its Claims About Meta’s Instagram



Recently, The Wire and Meta appeared to be fighting with each other over whether or not Meta was removing posts from Instagram that were reported by a specific person. The Wire made more than one post about this. Meta countered with a post on its Newsroom titled “What The Wire Reports Got Wrong”.

Today, The Wire posted the following statement:

“Earlier this week, The Wire announced its decision to conduct an internal review of its recent coverage of Meta, especially the sources and materials involved in our reporting.

“Our investigation, which is ongoing, does not as yet allow us to take a conclusive view about the authenticity and bona fides of the sources with whom a member of our reporting team says he has been in touch with over an extended period of time. However, certain discrepancies have emerged in the material used. These include the inability of our investigators to authenticate both the email purportedly sent from a*****@fb.com as well as the email purportedly received from Ujjwal Kumar (an expert cited in the reporting as having endorsed one of the findings, but who has, in fact, categorically denied sending such an email). As a result, The Wire believes it is appropriate to retract the stories.

“We are still reviewing the entire matter, including the possibility that it was deliberately sought to misinform or deceive The Wire.

“Lapses in editorial oversight are also being reviewed, as are editorial roles, so that failsafe protocols are put into place ensuring the accuracy of all source-based reporting.

“Given the discrepancies that have come to our attention via our review so far, The Wire will also conduct a thorough review of previous reporting done by the technical team involved in our Meta coverage, and remove the stories from public view until that process is complete…”

Prior to The Wire’s retraction, Meta posted about the situation in its Newsroom blog. Meta wrote the following:

“Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check program was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform”.

Meta also wrote that the claims in The Wire’s article were “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated.” Meta further stated that The Wire’s second story cites emails from a Meta employee, and that the screenshot included in the story contains two emails. Meta said both were fake.

It is unclear who, exactly, fed misinformation to The Wire regarding Meta’s moderation of Instagram. What is abundantly clear is that the person – or persons – appear to have fabricated evidence to support false claims. It is unfortunate that The Wire didn’t catch that before publication.


Meta And The Wire Are Fighting With Each Other



There appears to be a spat between Meta and The Wire over information that The Wire reported regarding Meta’s XCheck program. At a glance, it seems as though Meta has disagreements with things The Wire posted about the XCheck program and its effect on Instagram.

The Wire posted an article titled “If BJP’s Amit Malviya Reports Your Post, Instagram Will Take It Down – No Questions Asked” on October 10, 2022. In this article, The Wire reported that a specific satire account had some of its Instagram posts removed shortly after they were posted. According to The Wire, the removed posts had been reported by Instagram user Amit Malviya, who is reportedly president of the Bharatiya Janata Party’s infamous IT Cell.

On October 12, Meta responded with a post on its Newsroom titled: “What The Wire Reports Got Wrong”. Here is part of that post:

Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check system was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform.

While it is legitimate for us to be held accountable for our content decisions, the allegations made by The Wire are false. They contain mischaracterizations of how our enforcement processes work, and rely on what we believe to be fabricated evidence in their reporting. Here is what they got wrong…

According to Meta, the first article from The Wire claims that a cross-checked account has the power to remove content from Meta’s platform with no questions asked. Meta says this is false.

Meta claims the article was “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated”.

Meta states that it did not identify a user in connection with the account mentioned in The Wire’s first article.

Meta says the second story cites emails from a Meta employee, and claims that both emails shown in the screenshot included in the story are fake.

On October 15, The Wire tweeted a thread pushing back against Meta’s statements. In that thread, The Wire links to a new article that provides more information about why they believe that they reported things correctly.

On October 16, Meta added to its Newsroom post with the following:

This is an ongoing investigation and we will update as it unfolds. At this time, we can confirm that the video shared by The Wire that purports to show an internal Instagram system (and which The Wire claims is evidence that their false allegations are true) in fact depicts an externally-created Meta Workplace account that was deliberately set up with Instagram’s name and brand insignia in order to deceive people.

According to Meta, that Workplace account was set up as a free trial account on Meta’s enterprise Workplace product under the name “Instagram” and using the Instagram brand as its profile picture. It is not an internal account. Meta also claims that the account was created on October 13, and Meta believes it was set up to manufacture evidence to support The Wire’s reporting (which Meta called “inaccurate”).

Personally, I don’t really care who is right or who is wrong, mostly because I don’t have the time to sort through everything. I’ll leave you to decide that for yourself. That said, something in this fight between Meta and The Wire seems fishy to me… but I can’t pinpoint it.


Meta’s Legs Demonstration Video Was Misleading



Earlier this week Meta CEO Mark Zuckerberg took to the stage to demonstrate that, having spent billions of dollars to create a virtual reality universe (Horizon Worlds) that looked like it was from 2004, his company was working on improving that universe to make it look like it was from 2009 instead. Integral to this upgrade was the fact that avatars would no longer be mere floating torsos, but would soon have legs, Kotaku reported.

Luke Plunkett (at Kotaku) included a piece from Ethan Gach’s previous article titled: “Everyone Cheers As Mark Zuckerberg Reveals Feet.”

Today’s model is clearly an extension of that early rendering, and finally brings the VR platform past the likes of Fire Emblem: Awakening on the Nintendo 3DS, another game that lacked legs. And that was with Meta only spending $10 billion this year on the technology. Who knows what another small fortune will bring? If anything can catapult the Oculus storefront into the green, it’s a burgeoning market for VR feet pics. It might seem like we’re being ridiculous here, but do know that the live chat alongside the virtual audience watching all of this unfold absolutely exploded when Zuckerberg started talking about feet.

The Verge reported that during Meta’s Connect conference on Tuesday, Mark Zuckerberg made a huge announcement: the avatars in the company’s Horizon VR app will be getting legs soon. To demonstrate this groundbreaking technical achievement, Zuckerberg’s digital avatar lifted each leg in the air, then did a jump, while Aigerim Shorman’s avatar kicked into the air.

It turns out that the demonstration by Zuckerberg and Shorman was somewhat misleading. Ian Hamilton (VR Journalist, UploadVR Editor) tweeted: “For those who’ve been wondering about the legs shown in the Connect keynote … Meta: “To enable this preview of what’s to come, the segment featured animations created from motion capture.”

On October 11, Meta mentioned legs in a post about Meta Connect 2022: “It may sound like we’re just flipping a switch behind the scenes, but this took a lot of work to make it happen. When your digital body renders incorrectly – in the wrong spot, for instance – it can be distracting or even disturbing, and take you out of the experience immediately. And legs are hard! If your legs are under a desk or even just behind your arms, then the headset can’t see them properly and needs to rely on prediction.”

Meta continued: “We spent a long time making sure Meta Quest 2 could accurately – and reliably – bring your legs into VR. Legs will roll out to Horizon Worlds first, so we can see how it goes. Then we’ll begin bringing legs into more experiences over time as our technology improves.”

All of this sounds incredibly strange to me. The image at the top of this blog post comes from Meta’s blog. It gives me the impression that the legs that will eventually come to Horizon Worlds could be static and unable to allow an avatar to jump or kick into the air. Some people are going to be very disappointed.


Meta Warns 1M Facebook Users Their Login Info Might Be Compromised



The Washington Post reported that Facebook parent Meta is warning 1 million users that their login information may have been compromised through malicious apps.

According to The Washington Post, Meta’s researchers found more than 400 malicious Android and Apple iOS apps this year that were designed to steal the personal Facebook login information of its users, the company said Friday in a blog post. Meta spokesperson Gabby Curtis confirmed that Meta is warning 1 million users who may have been affected by the apps.

Meta said the apps they identified were listed in Apple’s App Store and Google Play Store as games, photo editors, health and safety lifestyle services and other types of apps to trick people into downloading them. Often the malicious app would ask users to “login with Facebook” and later steal their username and password, according to the company.

Meta posted information titled “Protecting People From Malicious Account Compromise Apps” in Meta’s Newsroom. Here is some of what Meta found:

Our security researchers have found more than 400 malicious Android and iOS apps this year that were designed to steal Facebook login information and compromise people’s accounts. These apps were listed on the Google Play Store and Apple’s App Store and disguised as photo editors, games, VPN services, business apps, and other utilities to trick people into downloading them. Some examples include:

  • Photo editors, including those that claim to allow you to “turn yourself into a cartoon”
  • VPNs claiming to boost browsing speed or grant access to blocked content or websites
  • Mobile games falsely promising high-quality 3D graphics
  • Health and lifestyle apps such as horoscopes and fitness trackers
  • Business or ad management apps claiming to provide hidden or unauthorized features not found in official apps by tech platforms.

Meta’s post included a pie chart showing the categories of the malicious apps: 42.6% were photo editor apps, 15.4% were business utility apps, 14.1% were phone utility apps, 11.7% were game apps, 11.7% were VPN apps, and 4.4% were lifestyle apps.

Meta also stated that malware apps often have telltale signs that differentiate them from legitimate apps. Here are a few things to consider before logging into a mobile app with your Facebook account:

Requiring social media credentials to use the app. Is the app unusable if you don’t provide your Facebook information? For example, be suspicious of a photo-editing app that needs your Facebook login and password before allowing you to use it.

The app’s reputation. Is the app reputable? Look at its download count, ratings and reviews, including the negative ones.

Promised features. Does the app provide the functionality it says it will, either before or after logging in?

I stopped using Facebook a long time ago. Back then, the worst thing that could happen to a person who played games on Facebook was that their strawberries would rot before they could tend them in FarmVille. I cannot help but wonder if the simplicity of the Zynga games that were on early Facebook made people presume that all apps on Facebook were safe.