
Meta Is Testing Meta Verified – A Paid Subscription Service



Meta announced they are testing Meta Verified, a new subscription bundle that includes account verification with impersonation protections and access to increased visibility and support. Personally, I think this decision was influenced by Twitter’s Twitter Blue subscription service.

The subscription is available for direct purchase on Instagram or Facebook in Australia and New Zealand starting later this week. People can purchase a monthly subscription for USD $11.99 on the web or USD $14.99 on iOS and Android.

With Meta Verified, you’ll get:

  • A verified badge, confirming you’re the real you and that your account has been authenticated with a government ID.
  • More protection from impersonation with proactive account monitoring for impersonators who might target people with growing online audiences.
  • Help when you need it with access to a real person for common account issues.
  • Increased visibility and reach with prominence in some areas of the platform – like search, comments and recommendations.
  • Exclusive features to express yourself in unique ways.

Meta stated: “It’s important to feel confident that your identity and accounts are safe and that the people you’re interacting with are who they say they are. That’s why we’re building a series of checks into Meta Verified before, during, and after someone applies.”

  • To be eligible, accounts must meet minimum activity requirements, such as a prior posting history, and applicants must be at least 18 years old.
  • Applicants are then required to submit a government ID that matches the profile name and photo of the Facebook or Instagram account they’re applying for.
  • Subscriptions will include proactive monitoring for account impersonation.

TechCrunch reported that Facebook-parent Meta has launched a subscription service called Meta Verified that will allow users to add the coveted blue check mark to their Instagram and Facebook accounts, for up to $15 a month, by verifying their identity. Chief executive Mark Zuckerberg announced the service on Sunday, tapping a revenue channel that has returned mixed success for Meta’s smaller rival Twitter.

According to TechCrunch, the revenues of Meta, which has opted not to charge its customers for most of its services in the more than a decade and a half since its founding, have taken a hit in recent years following Apple’s decision to introduce stringent privacy changes on iOS that curtail the social firm’s ability to track users’ internet activities.

The Zuckerberg-led firm, which makes nearly all of its money from advertising, said last year that Apple’s move would cost the company more than $10 billion in lost ad revenue in 2022.

TechCrunch also reported that Meta’s announcement follows Snap launching its own subscription service last year, through which it has converted over a million users into paid customers already.

I have an Instagram account, but I don’t use Facebook. The idea of giving Meta access to a government ID that matches the profile name on my Instagram is alarming. I don’t feel that giving that type of information to Meta is safe, especially since the company has a history of tracking people.


Meta Introduces Instagram Broadcast Channels



Mark Zuckerberg introduced broadcast channels on Instagram with his own “Meta Channel”. Broadcast channels are a public one-to-many messaging tool that creators can invite all of their followers into and share text, video and photo updates.

Meta stated that creators can also use voice notes to share their latest updates and behind-the-scenes moments, and even create polls to crowdsource fan feedback. Only creators can send messages in broadcast channels, while followers can react to content and vote in polls.

According to Meta, more features will be added to broadcast channels in the coming months, like the ability to bring another creator into the channel to discuss upcoming collabs, crowdsource questions for an “ask me anything” and more.

How Do Broadcast Channels Work?

Once a creator gets access to broadcast channels and sends the first message from their Instagram inbox, their followers will receive a one-time notification to join the channel. Anyone can discover the broadcast channel and view the content, but only followers who join the channel will receive notifications whenever there are updates.

Followers can leave or mute broadcast channels at any time and can also control their notifications from creators by going to a creator’s profile, tapping the bell icon and selecting “broadcast channel”.

Notifications will default to “some,” but this setting can be changed to “all” or “none.” Other than the invitation notification, followers will not get any other notifications about a broadcast channel unless they add the channel to their inbox. Once a channel is added to their inbox, it will appear among other message threads, and notifications will be turned on and function like any other chat.
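To pull those notification rules together in one place, here is a tiny illustrative model of the states described above. All names are invented, and this is only a reading of the behavior described in this post, not Instagram’s actual code:

    from enum import Enum

    # Illustrative model of the broadcast-channel notification behavior
    # described above. Names are invented; this is not Instagram code.
    class NotificationLevel(Enum):
        ALL = "all"
        SOME = "some"   # the default once a channel is added to the inbox
        NONE = "none"

    class ChannelMembership:
        def __init__(self) -> None:
            self.in_inbox = False
            self.level = NotificationLevel.SOME

        def should_notify(self, is_join_invitation: bool) -> bool:
            # Followers get exactly one invitation notification; anything
            # else requires the channel to be added to their inbox first.
            if is_join_invitation:
                return True
            return self.in_inbox and self.level != NotificationLevel.NONE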

When a broadcast channel is live, creators can also encourage their followers to join by using the “join channel” sticker in Stories or by pinning the channel link to their profile (coming soon).

TechCrunch reported that the new feature gives creators a new way to update their followers within the app. In the past, creators have usually posted a story to share news and updates with their followers, but they now have the option to use a more direct way to engage with their fans. The feature also lets creators get feedback on certain things and promote their content.

According to TechCrunch, Meta is debuting channels on Instagram first; Zuckerberg said the company plans to bring the feature to Messenger and Facebook in the coming months.

Gizmodo reported that Mark Zuckerberg said, “We’re starting to roll out Instagram channels – a new broadcast chat feature.” Zuckerberg continued, “I’m starting a channel to share news and updates on all the products and tech we’re building at Meta. It will be the place where I share Meta product news first.”

In my opinion, it sounds like Instagram Broadcasts could be interesting to people who use Instagram – especially if it provides an alternative to Instagram Stories. It remains to be seen if the feature will be as popular on Facebook or Messenger.


Instagram Provides Ways To Keep Your Account Safe



Instagram (which is owned by parent company Meta) announced that they are committed to fostering a safe and supportive community for everyone who uses Instagram. There are some easy things you can do to help keep your account safe, like making sure you have a strong password and enabling two-factor authentication.

Instagram has highlighted several new features designed to help keep people’s accounts safe, and offer them support if they lose account access.

Additional Account Support

To support accounts that are experiencing access issues or may have been hacked, Instagram created instagram.com/hacked – a new, comprehensive destination people can rely on to report and resolve account access issues.

If you are unable to log in to your account, enter instagram.com/hacked into your mobile or desktop browser. Next, you will be able to select whether you think you’ve been hacked, forgot your password, lost access to two-factor authentication, or had your account disabled. From there, you will be able to follow a series of steps to help regain access to your account.

Earlier this year, Instagram started testing a way for people to ask their friends to confirm their identity in order to regain access to their account, and this option is now available to everyone on Instagram. If you find yourself locked out of your account, you will be able to choose two of your Instagram friends to verify your identity and get back into your account.

Keeping Your Account Secure

Instagram is testing ways to help prevent hacking on Instagram before it happens. First, it removes accounts that its automated systems find to be malicious, including ones that impersonate others, which goes against its Community Guidelines. Second, because bad actors often don’t immediately use accounts maliciously, Instagram is now testing sending warnings if an account it suspects may be impersonating someone requests to follow you.

Engadget reported that Instagram created a hub where people can go to report and resolve account access issues they’re having. Engadget noted that this could be hugely beneficial for hacked users who are struggling to regain access to their accounts.

In addition, Engadget reported that if you get locked out of an account, you can get two Instagram friends to verify your identity. This feature was tested out earlier this year and is now available to everyone. The two friends that you select to help verify you will have 24 hours to respond to the request. If they do, Instagram will let you reset your password.
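To make the two-friend recovery flow concrete, here is a minimal sketch of the logic as Engadget describes it: the locked-out user picks two friends, each has 24 hours to respond, and a password reset is allowed only once both confirm. Every name here is invented for illustration; this is not Instagram’s actual implementation:

    from datetime import datetime, timedelta, timezone

    RESPONSE_WINDOW = timedelta(hours=24)

    class RecoveryRequest:
        """Hypothetical model of the two-friend identity check."""

        def __init__(self, account: str, friend_a: str, friend_b: str) -> None:
            self.account = account
            self.started_at = datetime.now(timezone.utc)
            # Both chosen friends must confirm the user's identity.
            self.pending = {friend_a, friend_b}

        def confirm(self, friend: str) -> str:
            # Friends have 24 hours to respond to the request.
            if datetime.now(timezone.utc) - self.started_at > RESPONSE_WINDOW:
                return "expired"
            self.pending.discard(friend)
            # Once both friends confirm, a password reset is allowed.
            return "reset_allowed" if not self.pending else "waiting"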

In my opinion, these changes made by Instagram are a step in the right direction. According to The Verge, some users whose Instagram accounts were stolen by hackers had to pay a ransom to get them back, and some had to turn to other hackers for help. It is good that Instagram is doing something to prevent that problem from happening.


Meta’s Oversight Board Criticizes ‘Cross Check’ Program For VIPs



Meta Platforms Inc. has long given unfair deference to VIP users of its Facebook and Instagram services under a program called “cross check” and has misled the public about the program, the company’s oversight board concluded in a report issued Tuesday, The Wall Street Journal reported.

According to The Wall Street Journal, the report offers the most detailed review of cross check, which Meta has billed as a quality-control effort to prevent moderation errors on content of heightened public interest. The oversight board took up the issue more than a year ago in the wake of a Wall Street Journal article, based on internal documents, showing that cross check was plagued by favoritism, mismanagement and understaffing.

Meta’s Oversight Board published a post titled “Oversight Board publishes policy advisory opinion on Meta’s cross-check program”. From that post:

Key Findings: The Board recognizes that the volume and complexity of content posted on Facebook and Instagram pose challenges for building systems that uphold Meta’s human rights commitments. However, in its current form, cross-check is flawed in key areas which the company must address:

Unequal treatment of users. Cross-check grants certain users greater protection than others. If a post from a user on Meta’s cross-check lists is identified as violating the company’s rules, it remains on the platform pending further review. Meta then applies its full range of policies, including exceptions and context-specific provisions, to the post, likely increasing its chances of remaining on the platform.

Ordinary users, by contrast, are much less likely to have their content reach reviewers who can apply the full range of Meta’s rules. This unequal treatment is particularly concerning given the lack of transparent criteria for Meta’s cross-check lists. While there are clear criteria for including business partners and government leaders, users whose content is likely to be important from a human rights perspective, such as journalists and civil society organizations, have less clear paths to access the program.
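As a rough sketch of the two-tier flow the Board describes: a flagged post from a cross-checked account stays visible and is queued for a fuller human review, while a flagged post from an ordinary user gets the automated decision immediately. The code below is purely illustrative, with invented names; it is not Meta’s actual system:

    # Illustrative sketch of the two-tier enforcement flow the Board
    # describes. All names are invented; this is not Meta's code.
    CROSS_CHECK_LIST = {"vip_user_123"}  # hypothetical enrolled accounts

    def keep_visible(post: str) -> None:
        print(f"{post}: stays on the platform pending further review")

    def queue_for_secondary_review(post: str) -> None:
        print(f"{post}: human review applies the full range of policies")

    def apply_automated_decision(post: str) -> None:
        print(f"{post}: standard automated enforcement applied now")

    def handle_flagged_post(post: str, author: str) -> None:
        if author in CROSS_CHECK_LIST:
            # Cross-checked content remains up while reviewers apply
            # exceptions and context-specific provisions.
            keep_visible(post)
            queue_for_secondary_review(post)
        else:
            # Ordinary users' content gets the automated decision, with
            # a much smaller chance of full human review.
            apply_automated_decision(post)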

Lack of transparency around how cross-check works. The Board is also concerned about the limited information Meta has provided to the public and its users about cross-check. Currently, Meta does not inform users that they are on cross-check lists and does not publicly share its procedures for creating and auditing these lists. It is unclear, for example, whether entities that continuously post violating content are kept on cross-check lists based on their profile. This lack of transparency impedes the Board and the public from understanding the full consequences of the program.

NPR reported that the board said Meta appeared to be more concerned with avoiding “provoking” VIPs and evading accusations of censorship than balancing tricky questions of free speech and safety. It called for the overhaul of the “flawed” program in a report on Tuesday that included wide-ranging recommendations to bring the program in line with international principles and Meta’s own stated values.

Personally, I don’t think it is fair for Meta to pick and choose which users are exempt from Meta’s rules about what people can, and cannot, post. Hopefully, the Oversight Board’s review will push Meta to treat all users equally.


Meta Releases Community Standards Enforcement Report



Earlier this year, Meta Platforms quietly convened a war room of staffers to address a critical problem: virtually all of Facebook’s top-ranked content was spammy, oversexualized, or generally what the company classified as regrettable, The Wall Street Journal reported.

Meta’s executives and researchers were growing embarrassed that its widely viewed content report, a quarterly survey of the posts with the broadest reach, was consistently dominated by stolen memes, engagement bait and link spam for sketchy online shops, according to documents viewed by The Wall Street Journal and people familiar with the issue.

Meta posted “Integrity and Transparency Reports, Third Quarter 2022”. It was written by Guy Rosen, VP of Integrity. Part of the report included Community Standards Enforcement Report Highlights.

It includes the following:

“Our actions against hate speech-related content decreased from 13.5 million to 10.6 million in Q3 2022 on Facebook because we improved the accuracy of our AI technology. We’ve done this by leveraging data from past user appeals to identify posts that could have been removed by mistake without appropriate cultural context.

“For example, now we can better recognize humorous terms of endearment used between friends, or better detect words that may be considered offensive or inappropriate in one context but not another. As we improve this accuracy, our proactive detection rate for hate speech also decreased from 95.6% to 90.2% in Q3 2022.”

Part of the report states that Meta’s actions against content that incites violence decreased from 19.3 million to 14.4 million in Q3 2022 after their improved AI technology was “better able to recognize language and emojis used in jest between friends.”

For bullying and harassment-related content, Meta’s proactive rate decreased in Q3 2022 from 76.7% to 67.8% on Facebook, and 87.4% to 87.3% on Instagram. Meta stated that this decrease was due to improved accuracy in their technologies (and a bug in their system that is now resolved).
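For context on what these percentages measure: Meta’s transparency reports define the proactive rate as the share of actioned content that the company found and flagged before any user reported it. A quick sketch of that arithmetic, using made-up counts rather than figures from the report:

    def proactive_rate(found_by_meta: int, reported_by_users: int) -> float:
        # Share of actioned content Meta found before users reported it.
        total_actioned = found_by_meta + reported_by_users
        return 100 * found_by_meta / total_actioned

    # Hypothetical counts only; not figures from the Q3 2022 report.
    print(f"{proactive_rate(873, 127):.1f}%")  # prints 87.3%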

On Facebook, Meta Took Action On:

16.7 million pieces of content related to terrorism, an increase from 13.5 million in Q2. This increase happened because non-violating videos were incorrectly added to Meta’s media-matching technology banks and removed (though they were eventually restored).

4.1 million pieces of drug content, an increase from 3.9 million in Q2 2022, due to improvements made to Meta’s proactive detection technology.

1.4 billion pieces of spam content, an increase from 734 million in Q2, due to an increased number of adversarial spam incidents in August.

On Instagram, Meta Took Action On:

2.2 million pieces of content related to terrorism, an increase from 1.9 million in Q2, because non-violating videos were incorrectly added to Meta’s media-matching technology banks and removed (though they were eventually restored).

2.5 million pieces of drug content, an increase from 1.9 million, due to improvements in Meta’s proactive detection technology.

AdWeek reported that Meta removed three networks during the third quarter of this year for violations of Meta’s policies against inauthentic behavior.

According to AdWeek, the first originated in the U.S. and was linked to individuals associated with the U.S. military, and it operated across many internet services and focused on Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Syria, Tajikistan, Uzbekistan and Yemen.

The second one originated in China and targeted the Czech Republic, the U.S., and, to a lesser extent, Chinese- and French-speaking audiences around the world. The third originated in Russia, primarily targeting Germany but also France, Italy, Ukraine and the U.K.

It appears that Meta has put some effort into cleaning up its platforms. I suppose that’s what happens when Meta’s own researchers were “embarrassed” by what was appearing on Facebook and Instagram!


The Wire Retracted Its Claims About Meta’s Instagram



Recently, The Wire and Meta appeared to be fighting with each other over whether or not Meta was removing posts from Instagram that were reported by a specific person. The Wire made more than one post about this. Meta countered with a post on its Newsroom titled “What The Wire Reports Got Wrong”.

Today, The Wire posted the following statement:

“Earlier this week, The Wire announced its decision to conduct an internal review of its recent coverage of Meta, especially the sources and materials involved in our reporting.

“Our investigation, which is ongoing, does not as yet allow us to take a conclusive view about the authenticity and bona fides of the sources with whom a member of our reporting team says he has been in touch over an extended period of time. However, certain discrepancies have emerged in the material used. These include the inability of our investigators to authenticate both the email purportedly sent from a*****@fb.com as well as the email purportedly received from Ujjwal Kumar (an expert cited in the reporting as having endorsed one of the findings, but who has, in fact, categorically denied sending such an email). As a result, The Wire believes it is appropriate to retract the stories.

“We are still reviewing the entire matter, including the possibility that it was deliberately sought to misinform or deceive The Wire.

“Lapses in editorial oversight are also being reviewed, as are editorial roles, so that failsafe protocols are put into place ensuring the accuracy of all source-based reporting.

“Given the discrepancies that have come to our attention via our review so far, The Wire will also conduct a thorough review of previous reporting done by the technical team involved in our Meta coverage, and remove the stories from public view until that process is complete…”

Prior to the retraction from The Wire, Meta posted about the situation on its Newsroom blog. Meta wrote the following:

“Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check program was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform”.

Meta also wrote that the claims in The Wire’s articles were “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated.” Meta also stated that The Wire’s second story cites emails from a Meta employee, and that the screenshot included in the story shows two emails, both of which Meta said were fake.

It is unclear who, exactly, fed misinformation to The Wire regarding Meta’s handling of Instagram content. What is abundantly clear is that the person, or persons, appear to have fabricated the material. It is unfortunate that The Wire didn’t catch that before publication.


Social Media Companies Killed A California Bill To Protect Kids



California lawmakers killed a bill Thursday that would have allowed government lawyers to sue social-media companies for features that allegedly harm children by causing them to become addicted, The Wall Street Journal reported.

According to The Wall Street Journal, the measure would have given the attorney general, local district attorneys and city attorneys in the biggest California cities authority to try to hold social-media companies liable in court for features that they knew, or should have known, could addict minors. Among those targeted could have been Facebook and Instagram parent Meta Platforms, Inc., Snapchat parent Snap Inc., and TikTok, owned by Chinese company ByteDance Ltd.

In June of 2022, Meta (parent company of Facebook and Instagram) was facing eight lawsuits filed in courthouses across the US that allege that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. More specifically, the lawsuits claim that the company built algorithms into its platforms that lure young people into destructive behavior.

The Wall Street Journal also reported that the bill died in the appropriations committee of the California state senate through a process known as the suspense file, in which lawmakers can halt the progress of dozens or even hundreds of potentially controversial bills without a public vote, based on their possible fiscal impact.

The death of the bill comes after social media companies worked aggressively to stop it, arguing that it would lead to hundreds of millions of dollars in liability and potentially prompt them to abandon the youth market nationwide. Meta, Twitter Inc., and Snap had all individually lobbied against the measure, according to state lobbying disclosures.

This doesn’t mean that a similar bill cannot be passed at the federal level. Politico reported earlier this month that the Senate Commerce Committee advanced two bills for floor consideration: it approved the Children and Teens’ Online Privacy Protection Act on a voice vote and the Kids Online Safety Act by a unanimous 28-0 vote.

According to Politico, the Kids Online Safety Act was co-sponsored by Richard Blumenthal (Democrat – Connecticut) and Marsha Blackburn (Republican – Tennessee). That bill, if passed, would require social media platforms to allow kids and their parents to opt out of content algorithms that have fed them harmful content and to disable addictive product features.

The Children and Teens’ Online Privacy Protection Act was sponsored by Bill Cassidy (Republican – Louisiana) and Ed Markey (Democrat – Massachusetts). That bill, if passed, would extend existing privacy protections for preteens to children up to age 16 and ban ads from targeting them. It would also give kids and their parents the right to delete information that online platforms have about them.

Personally, I think that parents who have allowed their children and teenagers to use social media should have complete control over preventing the social media companies from gathering data on their kids. Huge social media companies need to find other ways of sustaining revenue that don’t involve mining data from underage people in the hopes of earning money from ads.