Tag Archives: Facebook

John Carmack Is Leaving Meta



John Carmack, a pioneer of virtual reality technology, is leaving Meta after more than eight years at the company, according to an internal post reviewed by The New York Times.

The New York Times reported that in the post, which was written by Mr. Carmack, the technologist criticized his employer. He said Meta, which is in the midst of transitioning from a social networking company to one focused on the immersive world of the metaverse, was operating at “half the effectiveness” and has “a ridiculous amount of people and resources, but we constantly self-sabotage and squander the effort.”

“It’s been a struggle for me,” Mr. Carmack wrote in the post, which was published on an internal forum this week. “I have a voice at the highest levels here, so it feels like I should be able to move things, but I’m evidently not persuasive enough.”

Mr. Carmack was the chief technology officer of Oculus, the virtual reality company that Facebook bought for $2 billion in 2014, and one of the most influential voices leading the development of V.R. headsets. He stayed with Facebook after Mark Zuckerberg, the chief executive, decided last year to shift the company’s focus to the metaverse and renamed Facebook as Meta.

According to The New York Times, Mr. Carmack’s post, which said he was ending his decade in V.R., concluded by saying he had “wearied of the fight” and would focus on his own start-up. (He announced in August that his artificial intelligence firm, Keen Technologies, had raised $20 million.)

This week, Mr. Carmack testified in a court hearing over the Federal Trade Commission’s attempt to block Meta’s purchase of Within, the virtual reality start-up behind a fitness game called Supernatural. The agency has argued that the tech giant will snuff out competition in the nascent metaverse if it is allowed to complete the deal.

CNN reported that Carmack was celebrated for his work developing Wolfenstein 3D, Quake and Doom, and co-founded video game company id Software. He was an early advocate for virtual reality, though it was not uncommon for him to criticize Meta.

When asked for comment by CNN, Meta pointed to Carmack’s post and a tweet from CTO Andrew Bosworth.

“It is impossible to overstate the impact you’ve had on our work and the industry as a whole,” Bosworth tweeted. “Your technical prowess is widely known, but it is your relentless focus on creating value for people that we will remember most. Thank you and see you in VR.”

CBS News reported that John Carmack cut his ties with Meta Platforms, a holding company created last year by Facebook founder Mark Zuckerberg, in a Friday letter that vented his frustration as he stepped down as an executive consultant in virtual reality.

“There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy,” Carmack wrote in the letter, which he shared on Facebook. “Some may scoff and contend we are doing just fine, but others will laugh and say ‘Half? Ha! I’m at quarter efficiency!’”

Carmack’s departure comes at a time that Zuckerberg, Meta’s CEO, has been battling widespread perceptions that he has been wasting billions of dollars trying to establish the Menlo Park, California, company in the “metaverse” – an artificial world filled with avatars of real people.

It seems to me that John Carmack finally got frustrated enough with the metaverse project, and its apparent lack of progress, to decide to leave the company. I hope his own startup will be more efficient and interesting for him than the “metaverse” was.


Meta’s Oversight Board Criticizes ‘Cross Check’ Program For VIPs



Meta Platforms Inc. has long given unfair deference to VIP users of its Facebook and Instagram services under a program called “cross check” and has misled the public about the program, the company’s oversight board concluded in a report issued Tuesday, The Wall Street Journal reported.

According to The Wall Street Journal, the report offers the most detailed review yet of cross check, which Meta has billed as a quality-control effort to prevent moderation errors on content of heightened public interest. The oversight board took up the issue more than a year ago in the wake of a Wall Street Journal article based on internal documents that showed cross check was plagued by favoritism, mismanagement and understaffing.

Meta’s Oversight Board posted an announcement titled “Oversight Board publishes policy advisory opinion on Meta’s cross-check program”. From the announcement:

Key Findings: The Board recognizes that the volume and complexity of content posted on Facebook and Instagram pose challenges for building systems that uphold Meta’s human rights commitments. However, in its current form, cross-check is flawed in key areas which the company must address:

Unequal treatment of users. Cross-check grants certain users greater protection than others. If a post from a user on Meta’s cross-check lists is identified as violating the company’s rules, it remains on the platform pending further review. Meta then applies its full range of policies, including exceptions and context-specific provisions, to the post, likely increasing its chances of remaining on the platform.

Ordinary users, by contrast, are much less likely to have their content reach reviewers who can apply the full range of Meta’s rules. This unequal treatment is particularly concerning given the lack of transparent criteria for Meta’s cross-check lists. While there are clear criteria for including business partners and government leaders, users whose content is likely to be important from a human rights perspective, such as journalists and civil society organizations, have less clear paths to access the program.

Lack of transparency around how cross-check works. The Board is also concerned about the limited information Meta has provided to the public and its users about cross-check. Currently, Meta does not inform users that they are on cross-check lists and does not publicly share its procedures for creating and auditing these lists. It is unclear, for example, whether entities that continuously post violating content are kept on cross-check lists based on their profile. This lack of transparency impedes the Board and the public from understanding the full consequences of the program.

NPR reported that the board said Meta appeared to be more concerned with avoiding “provoking” VIPs and evading accusations of censorship than balancing tricky questions of free speech and safety. It called for the overhaul of the “flawed” program in a report on Tuesday that included wide-ranging recommendations to bring the program in line with international principles and Meta’s own stated values.

Personally, I don’t think it is fair for Meta to pick and choose which users are exempt from Meta’s rules about what people can, and cannot, post. Hopefully, the Oversight Board’s review will require Meta to treat all users equally.


Meta Releases Community Standards Enforcement Report



Earlier this year, Meta Platforms quietly convened a war room of staffers to address a critical problem: virtually all of Facebook’s top-ranked content was spammy, over sexualized, or generally what the company classified as regrettable, The Wall Street Journal reported.

Meta’s executives and researchers were growing embarrassed that its widely viewed content report, a quarterly survey of the posts with the broadest reach, was consistently dominated by stolen memes, engagement bait and link spam for sketchy online shops, according to documents viewed by The Wall Street Journal and people familiar with the issue.

Meta posted its “Integrity and Transparency Reports, Third Quarter 2022”, written by Guy Rosen, VP of Integrity. Part of the report included Community Standards Enforcement Report highlights.

It includes the following:

“Our actions against hate speech-related content decreased from 13.5 million to 10.6 million in Q3 2022 on Facebook because we improved the accuracy of our AI technology. We’ve done this by leveraging data from past user appeals to identify posts that could have been removed by mistake without appropriate cultural context.

“For example, now we can better recognize humorous terms of endearment used between friends, or better detect words that may be considered offensive or inappropriate in one context but not another. As we improve this accuracy, our proactive detection rate for hate speech also decreased from 95.6% to 90.2% in Q3 2022.”

Part of the report states that Meta’s actions against content that incites violence decreased from 19.3 million to 14.4 million in Q3 2022 after their improved AI technology was “better able to recognize language and emojis used in jest between friends.”

For bullying and harassment-related content, Meta’s proactive rate decreased in Q3 2022 from 76.7% to 67.8% on Facebook, and 87.4% to 87.3% on Instagram. Meta stated that this decrease was due to improved accuracy in their technologies (and a bug in their system that is now resolved).

On Facebook, Meta Took Action On:

16.7 million pieces of content related to terrorism, an increase from 13.5 million in Q2. This increase occurred because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

4.1 million pieces of drug content, an increase from 3.9 million in Q2 2022, due to improvements made to our proactive detection technology.

1.4 billion pieces of spam content, an increase from 734 million in Q2, due to an increased number of adversarial spam incidents in August.

On Instagram, Meta Took Action On:

2.2 million pieces of content related to terrorism, an increase from 1.9 million in Q2, because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

2.5 million pieces of drug content, an increase from 1.9 million in Q2, due to improvements in our proactive detection technology.

AdWeek reported that Meta removed three networks during the third quarter of this year for violations of Meta’s policies against inauthentic behavior.

According to AdWeek, the first originated in the U.S. and was linked to individuals associated with the U.S. military, and it operated across many internet services and focused on Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Syria, Tajikistan, Uzbekistan and Yemen.

The second one originated in China and targeted the Czech Republic, the U.S., and, to a lesser extent, Chinese- and French-speaking audiences around the world. The third originated in Russia, primarily targeting Germany but also France, Italy, Ukraine and the U.K.

It appears that Meta has put some effort into cleaning up its platforms. I suppose that’s what happens when Meta’s own researchers were “embarrassed” by what was appearing on Facebook and Instagram!


Facebook To Remove Topics From Users’ Profiles



Facebook quietly announced it will remove several categories of information from user profiles, including religious views, political views, addresses and the “Interested in” field, which indicates sexual preference. The change goes into effect on December 1, Gizmodo reported.

“As part of our efforts to make Facebook easier to navigate and use, we’re removing a handful of profile fields: Interested In, Religious Views, Political Views, and Address,” said Emil Vazquez, a Meta spokesperson. “We’re sending notifications to people who have these fields filled out, letting them know these fields will be removed. This change doesn’t affect anyone’s ability to share this information about themselves elsewhere on Facebook.”

According to Gizmodo, the shift reflects Meta’s broader public relations efforts. As a whole, the tech industry wants the public to differentiate between “sensitive” data and what you might call “regular” data. Meta will tell you that Instagram and Facebook don’t use sensitive data for advertising, for example, though that change only came after researchers uncovered serious problems.

Gizmodo also reported: Facebook earned a poor reputation, not just for causing societal problems but because it’s just not cool anymore. Users have been leaving the platform in droves, and even Instagram, Facebook’s younger and slightly hipper sibling, has seen its cachet decline.

The company is in dire financial straits as a result, Gizmodo reported. It laid off 11,000 employees just last week. CEO Mark Zuckerberg shifted the entire future of the company, moving away from social media and towards a moonshot goal of building a mixture of virtual and augmented reality he calls “the metaverse”. But in the meantime, Facebook and Instagram are still Meta’s only source of income.

TechCrunch reported that Facebook’s change was first spotted by social media consultant Matt Navarra, who tweeted a screenshot of the notice being sent to users who have these fields filled out. The notice indicates that users’ other information will remain on their profiles along with the rest of their contact and basic information.

According to TechCrunch, Facebook’s decision to get rid of these specific profile fields is part of its efforts to streamline its platform, which currently consists of several features that are somewhat outdated.

It’s worth noting that the information fields that Facebook is choosing to remove are ones that other major social networks don’t offer. Platforms like Instagram and TikTok have simple bios that let users share a little bit about themselves without going into specific details, such as political or religious views.

Engadget reported that other details that you provide Facebook, such as your contact information and relationship status, will persist. You can download a copy of your Facebook data before December 1st if you’re determined to preserve it, and you still have control over who can see the remaining profile content.

I’m seeing what might be a pattern. Facebook is removing information from the profiles of its users, making it harder for users to have an easy way to self-identify. Twitter is losing employees by the hundreds, which I assume would make it harder for the company to implement new features or enforce its terms of service. Could this be the end of social media as we know it?


Meta Warns 1M Facebook Users Their Login Info Might Be Compromised



The Washington Post reported that Facebook parent Meta is warning 1 million users that their login information may have been compromised through malicious apps.

According to The Washington Post, Meta’s researchers found more than 400 malicious Android and Apple iOS apps this year that were designed to steal the personal Facebook login information of its users, the company said Friday in a blog post. Meta spokesperson Gabby Curtis confirmed that Meta is warning 1 million users who may have been affected by the apps.

Meta said the apps it identified were listed in Apple’s App Store and the Google Play Store as games, photo editors, health and lifestyle services, and other types of apps to trick people into downloading them. Often a malicious app would ask users to “login with Facebook” and later steal their username and password, according to the company.

Meta posted information titled “Protecting People From Malicious Account Compromise Apps” in Meta’s Newsroom. Here is some of what Meta found:

Our security researchers have found more than 400 malicious Android and iOS apps this year that were designed to steal Facebook login information and compromise people’s accounts. These apps were listed on the Google Play Store and Apple’s App Store and disguised as photo editors, games, VPN services, business apps, and other utilities to trick people into downloading them. Some examples include:

  • Photo editors, including those that claim to allow you to “turn yourself into a cartoon”
  • VPNs claiming to boost browsing speed or grant access to blocked content or websites
  • Mobile games falsely promising high-quality 3D graphics
  • Health and lifestyle apps such as horoscopes and fitness trackers
  • Business or ad management apps claiming to provide hidden or unauthorized features not found in official apps by tech platforms.

Meta’s post included a pie chart that shows the categories of the malicious apps: 42.6% were photo editor apps, 15.4% were business utility apps, 14.1% were phone utility apps, 11.7% were game apps, 11.7% were VPN apps, and 4.4% were lifestyle apps.

Meta also stated that malware apps often have telltale signs that differentiate them from legitimate apps. Here are a few things to consider before logging into a mobile app with your Facebook account:

Requiring social media credentials to use the app. Is the app unusable if you don’t provide your Facebook information? For example, be suspicious of a photo-editing app that needs your Facebook login and password before allowing you to use it.

The app’s reputation. Is the app reputable? Look at its download count, ratings and reviews, including the negative ones.

Promised features. Does the app provide the functionality it says it will, either before or after logging in?

I stopped using Facebook a long time ago. Back then, the worst thing that could happen to a person who played games on Facebook was that their strawberries would rot before they could tend them in FarmVille. I cannot help but wonder if the simplicity of the Zynga games that were on early Facebook made people presume that all apps on Facebook were safe.


Facebook Marketplace And DoorDash Team Up



Drivers for DoorDash Inc. are delivering items that consumers purchase from Facebook Marketplace as part of a new partnership between the delivery app and Meta Platforms Inc., The Wall Street Journal reported.

The deal is an attempt to get more people, especially younger ones, to use Meta-owned Facebook, according to a person familiar with the plan, The Wall Street Journal reported. For DoorDash, the partnership boosts its ambition to expand into delivering more than food.

The service lets Facebook users purchase and receive items from Marketplace without leaving their homes. It can deliver items that fit in a car trunk and are up to 15 miles away, people familiar with the plan said. Deliveries would be made within 48 hours, they said.

For Meta, the idea behind the partnership is to try to get young people to use Marketplace more often, according to the person familiar with its plans. Marketplace is a feature within Facebook that lets people sell new and used goods to one another.

The Guardian reported, in 2021, that Apple’s iOS 14.5 update included the App Tracking Transparency feature. As you may recall, the update included a setting that requires applications to ask users for consent before they are able to track their activity across other apps and websites. If users decline, applications cannot access the unique user ID they need to follow individuals as they live their digital lives.

The Wall Street Journal reported: Over the past two years, Meta has increased its efforts to expand e-commerce on its social-media apps because it would help it sell ads. The company lost ad revenue over the past year after Apple Inc. changed its privacy rules for iPhones and iPads. The changes, according to The Wall Street Journal, made it easier for people to stop apps from tracking their devices.

Personally, I think that it is good for consumers to be able to choose whether or not they want a specific app to track them. It is my understanding that the vast majority of Apple users opted out of being tracked. In my opinion, it is unhealthy to base your company’s revenue entirely on the hope that consumers will decide to allow you to track them across the internet.

In the two years since that happened, it appears that Meta is now hoping that the younger demographic it is trying to attract has forgotten about Facebook’s full-page ads that appeared in The New York Times, The Wall Street Journal, and The Washington Post. In short, Facebook paid for the ads in an effort to spread misinformation about Apple’s App Tracking Transparency and Apple’s “nutrition label” that shows exactly what each app wants to track.

The Verge reported that it is still not exactly clear how many Marketplace users currently have the option for DoorDash deliveries, or how much it costs. It remains to be seen how many younger consumers want to order delivery through the Facebook and DoorDash partnership when they could just order food from DoorDash themselves.


Social Media Companies Killed A California Bill To Protect Kids



California lawmakers killed a bill Thursday that would have allowed government lawyers to sue social-media companies for features that allegedly harm children by causing them to become addicted, The Wall Street Journal reported.

According to The Wall Street Journal, the measure would have given the attorney general, local district attorneys and city attorneys in the biggest California cities authority to try to hold social-media companies liable in court for features that they knew or should have known could addict minors. Among those targeted could have been Facebook and Instagram parent Meta Platforms, Inc., Snapchat parent Snap Inc., and TikTok, owned by Chinese company ByteDance Ltd.

In June of 2022, Meta (parent company of Facebook and Instagram) was facing eight lawsuits filed in courthouses across the US that allege that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. More specifically, the lawsuits claim that the company built algorithms into its platforms that lure young people into destructive behavior.

The Wall Street Journal also reported that the bill died in the appropriations committee of the California state senate through a process known as the suspense file, in which lawmakers can halt the progress of dozens or even hundreds of potentially controversial bills without a public vote, based on their possible fiscal impact.

The death of the bill comes after social media companies worked aggressively to stop it, arguing that it would lead to hundreds of millions of dollars in liability and potentially prompt them to abandon the youth market nationwide. Meta, Twitter Inc., and Snap had each lobbied against the measure, according to state lobbying disclosures.

This doesn’t mean that a similar bill cannot be passed by the federal government. Politico reported earlier this month that the Commerce Committee advanced two bills for floor consideration: it approved the Children and Teens’ Online Privacy Protection Act on a voice vote and the Kids Online Safety Act by a unanimous 28-0 vote.

According to Politico, The Kids Online Safety Act was co-sponsored by Richard Blumenthal (Democrat – Connecticut) and Marsha Blackburn (Republican – Tennessee). That bill, if passed, would require social media platforms to allow kids and their parents to opt out of content algorithms that have fed them harmful content and disable addictive product features.

The Children and Teens’ Online Privacy Protection Act was sponsored by Bill Cassidy (Republican – Louisiana) and Ed Markey (Democrat – Massachusetts). That bill, if passed, would extend existing privacy protections for preteens to children up to age 16 and ban ads from targeting them. It would also give kids and their parents the right to delete information that online platforms have about them.

Personally, I think that parents of children and teenagers who have allowed their kids to use social media should have complete control over preventing the social media companies from gathering data on their children. Huge social media companies need to find other ways of sustaining revenue that don’t involve mining data from underage people in the hopes of gaining money from ads.