Category Archives: Meta

John Carmack Is Leaving Meta



John Carmack, a pioneer of virtual reality technology, is leaving Meta after more than eight years at the company, according to an internal post reviewed by The New York Times.

The New York Times reported that in the post, which was written by Mr. Carmack, the technologist criticized his employer. He said Meta, which is in the midst of transitioning from a social networking company to one focused on the immersive world of the metaverse, was operating at “half the effectiveness” and has “a ridiculous amount of people and resources, but we constantly self-sabotage and squander the effort.”

“It’s been a struggle for me,” Mr. Carmack wrote in the post, which was published on an internal forum this week. “I have a voice at the highest levels here, so it feels like I should be able to move things, but I’m evidently not persuasive enough.”

Mr. Carmack was the chief technology officer of Oculus, the virtual reality company that Facebook bought for $2 billion in 2014. He was one of the most influential voices leading the development of V.R. headsets. He stayed with Facebook after Mark Zuckerberg, the chief executive, decided last year to shift the company’s focus to the metaverse and renamed Facebook as Meta.

According to The New York Times, Mr. Carmack’s post, which said he was ending his decade in V.R., concluded by saying he had “wearied of the fight” and would focus on his own start-up. (He announced in August that his artificial intelligence firm, Keen Technologies, had raised $20 million.)

This week, Mr. Carmack testified in a court hearing over the Federal Trade Commission’s attempt to block Meta’s purchase of Within, the virtual reality start-up behind a fitness game called Supernatural. The agency has argued that the tech giant will snuff out competition in the nascent metaverse if it is allowed to complete the deal.

CNN reported that Carmack was celebrated for his work developing Wolfenstein 3D, Quake and Doom, and co-founded video game company id Software. He was an early advocate for virtual reality, though it was not uncommon for him to criticize Meta.

When asked for comment by CNN, Meta pointed to Carmack’s post and a tweet from CTO Andrew Bosworth.

“It is impossible to overstate the impact you’ve had on our work and the industry as a whole,” Bosworth tweeted. “Your technical prowess is widely known, but it is your relentless focus on creating value for people that we will remember most. Thank you and see you in VR.”

CBS News reported that John Carmack cut his ties with Meta Platforms, a holding company created last year by Facebook founder Mark Zuckerberg, in a Friday letter that vented his frustration as he stepped down as an executive consultant in virtual reality.

“There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy,” Carmack wrote in the letter, which he shared on Facebook. “Some may scoff and contend we are doing just fine, but others will laugh and say ‘Half? Ha! I’m at quarter efficiency!’”

Carmack’s departure comes at a time when Zuckerberg, Meta’s CEO, has been battling widespread perceptions that he has been wasting billions of dollars trying to establish the Menlo Park, California, company in the “metaverse” – an artificial world filled with avatars of real people.

It seems to me that John Carmack finally got frustrated enough with the metaverse project, and its apparent lack of progress, to decide to leave the company. I hope his own startup will be more efficient and interesting for him than the “metaverse” was.


Meta’s Oversight Board Criticizes ‘Cross Check’ Program For VIPs



Meta Platforms Inc. has long given unfair deference to VIP users of its Facebook and Instagram services under a program called “cross check” and has misled the public about the program, the company’s oversight board concluded in a report issued Tuesday, The Wall Street Journal reported.

According to The Wall Street Journal, the report offers the most detailed review of cross check, which Meta has billed as a quality-control effort to prevent moderation errors on content of heightened public interest. The oversight board took up the issue more than a year ago in the wake of a Wall Street Journal article based on internal documents showing that cross check was plagued by favoritism, mismanagement and understaffing.

Meta’s Oversight Board published a statement titled “Oversight Board publishes policy advisory opinion on Meta’s cross-check program”. From that statement:

Key Findings: The Board recognizes that the volume and complexity of content posted on Facebook and Instagram pose challenges for building systems that uphold Meta’s human rights commitments. However, in its current form, cross-check is flawed in key areas which the company must address:

Unequal treatment of users. Cross-check grants certain users greater protection than others. If a post from a user on Meta’s cross-check lists is identified as violating the company’s rules, it remains on the platform pending further review. Meta then applies its full range of policies, including exceptions and context-specific provisions, to the post, likely increasing its chances of remaining on the platform.

Ordinary users, by contrast, are much less likely to have their content reach reviewers who can apply the full range of Meta’s rules. This unequal treatment is particularly concerning given the lack of transparent criteria for Meta’s cross-check lists. While there are clear criteria for including business partners and government leaders, users whose content is likely to be important from a human rights perspective, such as journalists and civil society organizations, have less clear paths to access the program.

Lack of transparency around how cross-check works. The Board is also concerned about the limited information Meta has provided to the public and its users about cross-check. Currently, Meta does not inform users that they are on cross-check lists and does not publicly share its procedures for creating and auditing these lists. It is unclear, for example, whether entities that continuously post violating content are kept on cross-check lists based on their profile. This lack of transparency impedes the Board and the public from understanding the full consequences of the program.

NPR reported that the board said Meta appeared to be more concerned with avoiding “provoking” VIPs and evading accusations of censorship than balancing tricky questions of free speech and safety. It called for the overhaul of the “flawed” program in a report on Tuesday that included wide-ranging recommendations to bring the program in line with international principles and Meta’s own stated values.

Personally, I don’t think it is fair for Meta to pick and choose which users are exempt from its rules about what people can and cannot post. Hopefully, the Oversight Board’s review will require Meta to treat all users equally.


Meta Releases Community Standards Enforcement Report



Earlier this year, Meta Platforms quietly convened a war room of staffers to address a critical problem: virtually all of Facebook’s top-ranked content was spammy, oversexualized, or generally what the company classified as regrettable, The Wall Street Journal reported.

Meta’s executives and researchers were growing embarrassed that its widely viewed content report, a quarterly survey of the posts with the broadest reach, was consistently dominated by stolen memes, engagement bait and link spam for sketchy online shops, according to documents viewed by The Wall Street Journal and people familiar with the issue.

Meta posted its “Integrity and Transparency Reports, Third Quarter 2022”, written by Guy Rosen, VP of Integrity. Part of the report included highlights from the Community Standards Enforcement Report.

It includes the following:

“Our actions against hate speech-related content decreased from 13.5 million to 10.6 million in Q3 2022 on Facebook because we improved the accuracy of our AI technology. We’ve done this by leveraging data from past user appeals to identify posts that could have been removed by mistake without appropriate cultural context.

“For example, now we can better recognize humorous terms of endearment used between friends, or better detect words that may be considered offensive or inappropriate in one context but not another. As we improve this accuracy, our proactive detection rate for hate speech also decreased from 95.6% to 90.2% in Q3 2022.”

Part of the report states that Meta’s actions against content that incites violence decreased from 19.3 million to 14.4 million in Q3 2022 after their improved AI technology was “better able to recognize language and emojis used in jest between friends.”

For bullying and harassment-related content, Meta’s proactive rate decreased in Q3 2022 from 76.7% to 67.8% on Facebook, and 87.4% to 87.3% on Instagram. Meta stated that this decrease was due to improved accuracy in their technologies (and a bug in their system that is now resolved).

On Facebook, Meta Took Action On:

16.7 million pieces of content related to terrorism, an increase from 13.5 million in Q2. This increase was because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

4.1 million pieces of drug content, an increase from 3.9 million in Q2 2022, due to improvements made to our proactive detection technology.

1.4 billion pieces of spam content, an increase from 734 million in Q2, due to an increased number of adversarial spam incidents in August.

On Instagram, Meta Took Action On:

2.2 million pieces of content related to terrorism, an increase from 1.9 million in Q2, because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

2.5 million pieces of drug content, an increase from 1.9 million, due to improvements in our proactive detection technology.

AdWeek reported that Meta removed three networks during the third quarter of this year for violations of Meta’s policies against inauthentic behavior.

According to AdWeek, the first originated in the U.S. and was linked to individuals associated with the U.S. military, and it operated across many internet services and focused on Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Syria, Tajikistan, Uzbekistan and Yemen.

The second one originated in China and targeted the Czech Republic, the U.S., and, to a lesser extent, Chinese- and French-speaking audiences around the world. The third originated in Russia, primarily targeting Germany but also France, Italy, Ukraine and the U.K.

It appears that Meta has put some effort into cleaning up its platforms. I suppose that’s what happens when Meta’s own researchers were “embarrassed” by what was appearing on Facebook and Instagram!


Meta Is Preparing To Notify Employees of Large-Scale Layoffs



Meta Platforms Inc. (parent company of Facebook) is planning to begin large-scale layoffs this week, according to people familiar with the matter, in what could be the largest round in a recent spate of tech job cuts after the industry’s rapid growth during the pandemic, The Wall Street Journal reported.

According to The Wall Street Journal, the layoffs are expected to affect many thousands of employees and an announcement is planned to come as soon as Wednesday, according to the people. Meta reported more than 87,000 employees at the end of September. Company officials already told employees to cancel nonessential travel beginning this week, the people said.

The Wall Street Journal also reported that the planned layoffs would be the first broad head-count reductions to occur in the company’s 18-year history. While smaller on a percentage basis than the cuts at Twitter Inc. this past week, which hit about half of that company’s staff, the number of Meta employees expected to lose their jobs could be the largest to date at a major technology corporation in a year that has seen a tech-industry retrenchment.

The New York Times reported that Meta plans to lay off employees this week, three people with knowledge of the situation said, adding that the job cuts were set to be the most significant at the company since it was founded in 2004.

According to The New York Times, it was unclear how many people would be cut and in which departments, said the people, who declined to be identified because they were not authorized to speak publicly. The layoffs were expected by the end of the week. Meta had 87,314 employees at the end of September, up 28 percent from a year ago.

Why the job cuts? The New York Times explained that Meta has been struggling financially for months and has been increasingly clamping down on costs. The Silicon Valley company, which owns Facebook, Instagram, WhatsApp and Messenger, has spent billions of dollars on the emerging technology of the metaverse, an immersive online world, just as the global economy has slowed and inflation has soared.

In addition, digital advertising – which forms the bulk of Meta’s revenue – has weakened as advertisers have pulled back, affecting many social media companies. Meta’s business has also been hurt by privacy changes that Apple enacted, which have hampered the ability of many apps to target mobile ads to users.

Are we looking at the end of the biggest social media giants? Massive layoffs are never a good sign for any company. They indicate that the company is losing money so quickly that it feels the need to shed a large share of its workforce. Meta pretty much did this to itself, by basing the majority of its income on the money it got from ads, which are less lucrative now since Apple’s changes. Twitter, on the other hand, is dealing with the chaos of Elon Musk’s choices.


The Wire Retracted Its Claims About Meta’s Instagram



Recently, The Wire and Meta appeared to be fighting with each other over whether or not Meta was removing posts from Instagram that were reported by a specific person. The Wire made more than one post about this. Meta countered with a post on its Newsroom titled “What The Wire Reports Got Wrong”.

Today, The Wire posted the following statement:

“Earlier this week, The Wire announced its decision to conduct an internal review of its recent coverage of Meta, especially the sources and materials involved in our reporting.

“Our investigation, which is ongoing, does not as yet allow us to take a conclusive view about the authenticity and bona fides of the sources with whom a member of our reporting team says he has been in touch over an extended period of time. However, certain discrepancies have emerged in the material used. These include the inability of our investigators to authenticate both the email purportedly sent from a*****@fb.com as well as the email purportedly received from Ujjwal Kumar (an expert cited in the reporting as having endorsed one of the findings, but who has, in fact, categorically denied sending such an email). As a result, The Wire believes it is appropriate to retract the stories.

“We are still reviewing the entire matter, including the possibility that it was deliberately sought to misinform or deceive The Wire.

“Lapses in editorial oversight are also being reviewed, as are editorial roles, so that failsafe protocols are put into place ensuring the accuracy of all source-based reporting.

“Given the discrepancies that have come to our attention via our review so far, The Wire will also conduct a thorough review of previous reporting done by the technical team involved in our Meta coverage, and remove the stories from public view until that process is complete…”

Prior to the retraction from The Wire, Meta posted about the situation in its Newsroom blog. Meta wrote the following:

“Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check program was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform”.

Meta also wrote that the claims in The Wire’s article were “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated.” Meta also stated that The Wire’s second story cites emails from a Meta employee – and claimed that the screenshot included in the story shows two emails. Meta said both were fake.

It is unclear who, exactly, fed misinformation to The Wire regarding Meta’s Instagram interactions. What is abundantly clear is that the person – or persons – appear to have fabricated evidence to support false claims. It is unfortunate that The Wire didn’t catch that before publication.


Meta And The Wire Are Fighting With Each Other



There appears to be a spat between Meta and The Wire over information that The Wire reported regarding Meta’s XCheck program. At a glance, it seems as though Meta has disagreements with things that The Wire posted regarding Meta’s XCheck program and its effect on Instagram.

The Wire posted an article titled: “If BJP’s Amit Malviya Reports Your Post, Instagram Will Take It Down – No Questions Asked” on October 10, 2022. In this article, The Wire reported that a specific satire account had some of its Instagram posts removed shortly after they were posted. According to The Wire, the posts that were removed were reported by Instagram user Amit Malviya, who is reportedly head of the Bharatiya Janata Party’s infamous IT Cell.

On October 12, Meta responded with a post on its Newsroom titled: “What The Wire Reports Got Wrong”. Here is part of that post:

Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check system was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform.

While it is legitimate for us to be held accountable for our content decisions, the allegations made by The Wire are false. They contain mischaracterizations of how our enforcement processes work, and rely on what we believe to be fabricated evidence in their reporting. Here is what they got wrong…

According to Meta, the first article from The Wire claims that a cross-check account has the power to remove content from Meta’s platform with no questions asked. Meta says this is false.

Meta claims the article was “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated”.

Meta states that they did not identify a user regarding the account mentioned in The Wire’s first article.

Meta says the second story cites emails from a Meta employee – and claims that the screenshot included in the story shows two emails, both of which Meta says are fake.

On October 15, The Wire tweeted a thread pushing back against Meta’s statements. In that thread, The Wire links to a new article that provides more information about why they believe that they reported things correctly.

On October 16, Meta added to its Newsroom post with the following:

This is an ongoing investigation and we will update as it unfolds. At this time, we can confirm that the video shared by The Wire that purports to show an internal Instagram system (and which The Wire claims is evidence that their false allegations are true) in fact depicts an externally-created Meta Workplace account that was deliberately set up with Instagram’s name and brand insignia in order to deceive people.

According to Meta, that Workplace account was set up as a free trial account on Meta’s enterprise Workplace product under the name “Instagram” and using the Instagram brand as its profile picture. It is not an internal account. Meta also claims that the account was created on October 13, and Meta believes it was set up to manufacture evidence to support The Wire’s reporting (which Meta called “inaccurate”).

Personally, I don’t really care who is right or who is wrong, mostly because I don’t have the time to sort through everything. I’ll leave you to decide that for yourself. That said, something in this fight between Meta and The Wire seems fishy to me… but I can’t pinpoint it.


Meta’s Legs Demonstration Video Was Misleading



Earlier this week Meta CEO Mark Zuckerberg took to the stage to demonstrate that, having spent billions of dollars to create a virtual reality universe (Horizon Worlds) that looked like it was from 2004, his company was working on improving that universe to make it look like it was from 2009 instead. Integral to this upgrade was the fact that avatars would no longer be mere floating torsos, but would soon have legs, Kotaku reported.

Luke Plunkett (at Kotaku) included a piece from Ethan Gach’s previous article titled: “Everyone Cheers As Mark Zuckerberg Reveals Feet.”

Today’s model is clearly an extension of that early rendering, and finally brings the VR platform past the likes of Fire Emblem: Awakening on the Nintendo 3DS, another game that lacked legs. And that was with Meta only spending $10 billion this year on the technology. Who knows what another small fortune will bring? If anything can catapult the Oculus storefront into the green, it’s a burgeoning market for VR feet pics. It might seem like we’re being ridiculous here, but do know that the live chat alongside the virtual audience watching all of this unfold absolutely exploded when Zuckerberg started talking about feet.

The Verge reported that during Meta’s Connect conference on Tuesday, Mark Zuckerberg made a huge announcement: the avatars in the company’s Horizon VR app will be getting legs soon. To demonstrate this groundbreaking technical achievement, Zuckerberg’s digital avatar lifted each leg in the air, then did a jump, while Aigerim Shorman’s avatar kicked into the air.

It turns out that the demonstration by Zuckerberg and Shorman was somewhat misleading. Ian Hamilton (VR journalist and UploadVR editor) tweeted: “For those who’ve been wondering about the legs shown in the Connect keynote … Meta: ‘To enable this preview of what’s to come, the segment featured animations created from motion capture.’”

On October 11, Meta mentioned legs in a post about Meta Connect 2022. “It may sound like we’re just flipping a switch behind the scenes, but this took a lot of work to make it happen. When your digital body renders incorrectly – in the wrong spot, for instance – it can be distracting or even disturbing, and take you out of the experience immediately. And legs are hard! If your legs are under a desk or even just behind your arms, then the headset can’t see them properly and needs to rely on prediction”.

Meta continued: “We spent a long time making sure Meta Quest 2 could accurately – and reliably – bring your legs into VR. Legs will roll out to Worlds first, so we can see how it goes. Then we’ll begin bringing legs into more experiences over time as our technology improves.”

All of this sounds incredibly strange to me. The image at the top of this blog post comes from Meta’s blog. It gives me the impression that the legs that will eventually come to Horizon Worlds could be static and unable to allow an avatar to jump or kick into the air. Some people are going to be very disappointed.