Tag Archives: Instagram

Instagram Provided Ways To Keep Your Account Safe



Instagram (which is owned by parent company Meta) announced that it is committed to fostering a safe and supportive community for everyone who uses Instagram. There are some easy things you can do to help keep your account safe, like making sure you have a strong password and enabling two-factor authentication.

Instagram has highlighted several new features designed to help keep people’s accounts safe, and offer them support if they lose account access.

Additional Account Support

To support accounts that are experiencing access issues or may have been hacked, Instagram created instagram.com/hacked – a new, comprehensive destination people can rely on to report and resolve account access issues.

If you are unable to log in to your account, enter instagram.com/hacked on your mobile phone or desktop browser. Next, you will be able to select if you think you’ve been hacked, forgot your password, lost access to two-factor authentication or if your account has been disabled. From there, you will be able to follow a series of steps to help regain your account.

Earlier this year, Instagram started testing a way for people to ask their friends to confirm their identity in order to regain access to their account, and this option is now available to everyone on Instagram. If you find yourself locked out of your account, you will be able to choose two of your Instagram friends to verify your identity and get back into your account.

Keeping Your Account Secure

Instagram is testing ways to help prevent hacking before it happens. First, it removes accounts that its automated systems find to be malicious, including ones that impersonate others, which goes against Instagram's Community Guidelines. Second, because bad actors often don't immediately use accounts maliciously, Instagram is now testing sending warnings if an account that it suspects may be impersonating someone requests to follow you.

Engadget reported that Instagram created a hub where people can go to report and resolve account access issues they’re having. Engadget noted that this could be hugely beneficial for hacked users who are struggling to regain access to their accounts.

In addition, Engadget reported that if you get locked out of an account, you can get two Instagram friends to verify your identity. This feature was tested out earlier this year and is now available to everyone. The two friends that you select to help verify you will have 24 hours to respond to the request. If they do, Instagram will let you reset your password.

In my opinion, these changes made by Instagram are a step in the right direction. According to The Verge, some Instagram users whose accounts were stolen by hackers had to pay a ransom to get them back, and some had to turn to other hackers for help. It is good that Instagram is doing something to prevent that problem from happening.


Meta’s Oversight Board Criticizes ‘Cross Check’ Program For VIPs



Meta Platforms Inc. has long given unfair deference to VIP users of its Facebook and Instagram services under a program called “cross check” and has misled the public about the program, the company’s oversight board concluded in a report issued Tuesday, The Wall Street Journal reported.

According to The Wall Street Journal, the report offers the most detailed review of cross check, which Meta has billed as a quality-control effort to prevent moderation errors on content of heightened public interest. The oversight board took up the issue more than a year ago in the wake of a Wall Street Journal article based on internal documents that showed that cross check was plagued by favoritism, mismanagement and understaffing.

Meta’s Oversight Board posted information titled: “Oversight Board publishes policy advisory opinion on Meta’s cross-check program”. From the information:

Key Findings: The Board recognizes that the volume and complexity of content posted on Facebook and Instagram pose challenges for building systems that uphold Meta’s human rights commitments. However, in its current form, cross-check is flawed in key areas which the company must address:

Unequal treatment of users. Cross-check grants certain users greater protection than others. If a post from a user on Meta’s cross-check lists is identified as violating the company’s rules, it remains on the platform pending further review. Meta then applies its full range of policies, including exceptions and context-specific provisions, to the post, likely increasing its chances of remaining on the platform.

Ordinary users, by contrast, are much less likely to have their content reach reviewers who can apply the full range of Meta’s rules. This unequal treatment is particularly concerning given the lack of transparent criteria for Meta’s cross-check lists. While there are clear criteria for including business partners and government leaders, users whose content is likely to be important from a human rights perspective, such as journalists and civil society organizations, have less clear paths to access the program.

Lack of transparency around how cross-check works. The Board is also concerned about the limited information Meta has provided to the public and its users about cross-check. Currently, Meta does not inform users that they are on cross-check lists and does not publicly share its procedures for creating and auditing these lists. It is unclear, for example, whether entities that continuously post violating content are kept on cross-check lists based on their profile. This lack of transparency impedes the Board and the public from understanding the full consequences of the program.

NPR reported that the board said Meta appeared to be more concerned with avoiding “provoking” VIPs and evading accusations of censorship than balancing tricky questions of free speech and safety. It called for the overhaul of the “flawed” program in a report on Tuesday that included wide-ranging recommendations to bring the program in line with international principles and Meta’s own stated values.

Personally, I don't think it is fair for Meta to pick and choose which users are exempt from Meta's rules about what people can, and cannot, post. Hopefully, the Oversight Board's review will push Meta to treat all users equally.


Meta Releases Community Standards Enforcement Report



Earlier this year, Meta Platforms quietly convened a war room of staffers to address a critical problem: virtually all of Facebook's top-ranked content was spammy, oversexualized, or generally what the company classified as regrettable, The Wall Street Journal reported.

Meta’s executives and researchers were growing embarrassed that its widely viewed content report, a quarterly survey of the posts with the broadest reach, was consistently dominated by stolen memes, engagement bait and link spam for sketchy online shops, according to documents viewed by The Wall Street Journal and people familiar with the issue.

Meta posted “Integrity and Transparency Reports, Third Quarter 2022”. It was written by Guy Rosen, VP of Integrity. Part of the report included Community Standards Enforcement Report Highlights.

It includes the following:

“Our actions against hate speech-related content decreased from 13.5 million to 10.6 million in Q3 2022 on Facebook because we improved the accuracy of our AI technology. We’ve done this by leveraging data from past user appeals to identify posts that could have been removed by mistake without appropriate cultural context.

“For example, now we can better recognize humorous terms of endearment used between friends, or better detect words that may be considered offensive or inappropriate in one context but not another. As we improve this accuracy, our proactive detection rate for hate speech also decreased from 95.6% to 90.2% in Q3 2022.”

Part of the report states that Meta’s actions against content that incites violence decreased from 19.3 million to 14.4 million in Q3 2022 after their improved AI technology was “better able to recognize language and emojis used in jest between friends.”

For bullying and harassment-related content, Meta’s proactive rate decreased in Q3 2022 from 76.7% to 67.8% on Facebook, and 87.4% to 87.3% on Instagram. Meta stated that this decrease was due to improved accuracy in their technologies (and a bug in their system that is now resolved).

On Facebook, Meta Took Action On:

16.7 million pieces of content related to terrorism, an increase from 13.5 million in Q2. This increase occurred because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

4.1 million pieces of drug content, an increase from 3.9 million in Q2 2022, due to improvements made to our proactive detection technology.

1.4 billion pieces of spam content, an increase from 734 million in Q2, due to an increased number of adversarial spam incidents in August.

On Instagram, Meta Took Action On:

2.2 million pieces of content related to terrorism, an increase from 1.9 million in Q2, because non-violating videos were incorrectly added to our media-matching technology banks and were removed (though they were eventually restored).

2.5 million pieces of drug content, an increase from 1.9 million, due to improvements in our proactive detection technology.

AdWeek reported that Meta removed three networks during the third quarter of this year for violations of Meta’s policies against inauthentic behavior.

According to AdWeek, the first originated in the U.S. and was linked to individuals associated with the U.S. military, and it operated across many internet services and focused on Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Syria, Tajikistan, Uzbekistan and Yemen.

The second one originated in China and targeted the Czech Republic, the U.S., and, to a lesser extent, Chinese- and French-speaking audiences around the world. The third originated in Russia, primarily targeting Germany but also France, Italy, Ukraine and the U.K.

It appears that Meta has put some effort into cleaning up its platforms. I suppose that’s what happens when Meta’s own researchers were “embarrassed” by what was appearing on Facebook and Instagram!


The Wire Retracted Its Claims About Meta’s Instagram



Recently, The Wire and Meta appeared to be fighting with each other over whether or not Meta was removing posts from Instagram that were reported by a specific person. The Wire made more than one post about this. Meta countered with a post on its Newsroom titled “What The Wire Reports Got Wrong”.

Today, The Wire posted the following statement:

“Earlier this week, The Wire announced its decision to conduct an internal review of its recent coverage of Meta, especially the sources and materials involved in our reporting.

“Our investigation, which is ongoing, does not as yet allow us to take a conclusive view about the authenticity and bona fides of the sources with whom a member of our reporting team says he has been in touch with over an extended period of time. However, certain discrepancies have emerged in the material used. These include the inability of our investigators to authenticate both the email purportedly sent from a*****@fb.com as well as the email purportedly received from Ujjwal Kumar, (an expert cited in the reporting as having endorsed one of the findings, but who has, in fact, categorically denied sending such an email.) As a result, The Wire believes it is appropriate to retract the stories.

“We are still reviewing the entire matter, including the possibility that it was deliberately sought to misinform or deceive The Wire.

“Lapses in editorial oversight are also being reviewed, as are editorial roles, so that failsafe protocols are put into place ensuring the accuracy of all source-based reporting.

“Given the discrepancies that have come to our attention via our review so far, The Wire will also conduct a thorough review of previous reporting done by the technical team involved in our Meta coverage, and remove the stories from public view until that process is complete…”

Prior to the retraction from The Wire, Meta posted about the situation on its Newsroom blog. Meta wrote the following:

“Two articles published by The Wire allege that a user whose account is cross-checked can influence decisions on Instagram without any review. Our cross-check program was built to prevent potential over-enforcement mistakes and to double-check cases where a decision could require more understanding or there could be a higher risk for a mistake. To be clear, our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform”.

Meta also wrote that the claims in The Wire’s articles were “based on allegedly leaked screenshots from our internal tools. We believe this document is fabricated.” Meta also stated that The Wire’s second story cites emails from a Meta employee, and claimed that the screenshot included in the story has two emails. Meta said both were fake.

It is unclear who, exactly, fed misinformation to The Wire regarding Meta’s moderation of Instagram. What is abundantly clear is that the person, or persons, appear to have fabricated the material behind these claims. It is unfortunate that The Wire didn’t catch that before publication.


Social Media Companies Killed A California Bill To Protect Kids



California lawmakers killed a bill Thursday that would have allowed government lawyers to sue social-media companies for features that allegedly harm children by causing them to become addicted, The Wall Street Journal reported.

According to The Wall Street Journal, the measure would have given the attorney general, local district attorneys and city attorneys in the biggest California cities authority to try to hold social-media companies liable in court for features that the companies knew, or should have known, could addict minors. Among those targeted could have been Facebook and Instagram parent Meta Platforms, Inc., Snapchat parent Snap Inc., and TikTok, owned by Chinese company ByteDance Ltd.

In June of 2022, Meta (parent company of Facebook and Instagram) was facing eight lawsuits filed in courthouses across the US that allege that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. More specifically, the lawsuits claim that the company built algorithms into its platforms that lure young people into destructive behavior.

The Wall Street Journal also reported that the bill died in the appropriations committee of the California state senate through a process known as the suspense file, in which lawmakers can halt the progress of dozens or even hundreds of potentially controversial bills without a public vote, based on their possible fiscal impact.

The bill's death comes after social media companies lobbied aggressively against it, arguing that it would lead to hundreds of millions of dollars in liability and potentially prompt them to abandon the youth market nationwide. Meta, Twitter Inc., and Snap all had individually lobbied against the measure, according to state lobbying disclosures.

This doesn’t mean that a similar bill cannot be passed at the federal level. Politico reported earlier this month that the Senate Commerce Committee advanced two bills for floor consideration: it approved the Children and Teens’ Online Privacy Protection Act on a voice vote and the Kids Online Safety Act by a unanimous 28-0 vote.

According to Politico, The Kids Online Safety Act was co-sponsored by Richard Blumenthal (Democrat – Connecticut) and Marsha Blackburn (Republican – Tennessee). That bill, if passed, would require social media platforms to allow kids and their parents to opt out of content algorithms that have fed them harmful content and disable addictive product features.

The Children and Teens’ Online Privacy Protection Act was sponsored by Bill Cassidy (Republican – Louisiana) and Ed Markey (Democrat – Massachusetts). That bill, if passed, would extend existing privacy protections for preteens to children up to age 16 and ban ads from targeting them. It would also give kids and their parents the right to delete information that online platforms have about them.

Personally, I think that parents who have allowed their children and teenagers to use social media should have complete control over preventing the social media companies from gathering data on their kids. Huge social media companies need to find ways of sustaining revenue that don’t involve mining data from underage users in the hope of profiting from ads.


Instagram And Facebook Can Track You Through Sketchy Methods



Are you using Instagram and/or Facebook (Meta) on your iOS phone? If so, you might want to stop doing that. Felix Krause provided detailed information that, to me, sounds like those apps can track you on your phone even if you’ve told them not to. It is done in a sketchy way that most people won’t immediately recognize.

I recommend you read Felix Krause’s entire blog post. It made me reconsider using the Instagram app on my phone. (I stopped using Facebook ages ago).

What Instagram (and Facebook and Meta) do:

  • Links to external websites are rendered inside the Instagram app, instead of in the built-in Safari browser
  • This allows Instagram to monitor everything happening on external websites, without consent from the user or the website provider
  • The Instagram app injects its JavaScript code into every website shown, including when you tap on ads. Even though pcm.js itself doesn’t do this, injecting custom scripts into third-party websites allows Instagram to monitor all user interactions, like every button and link tapped, text selections, screenshots, as well as any form inputs like passwords, addresses and credit card numbers.
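The injection pattern described above can be made concrete with a small sketch. The function below is hypothetical (it is not Krause's actual detection tool, and the allow-list and URLs are invented for illustration); it shows the general idea of how a page could flag script URLs whose origins it never included itself. In a real browser, the list of script URLs would come from the live DOM, e.g. `[...document.scripts].map((s) => s.src)`.

```javascript
// Sketch: flag script URLs that the page did not include itself.
// `allowedOrigins` is a hypothetical allow-list of the page's own
// first-party and known third-party script origins.
function findUnexpectedScripts(scriptUrls, allowedOrigins) {
  return scriptUrls.filter((url) => {
    try {
      const origin = new URL(url).origin;
      return !allowedOrigins.includes(origin);
    } catch {
      // Malformed or inline entries can't be attributed to an origin;
      // treat them as suspicious in this sketch.
      return true;
    }
  });
}

// Hypothetical example: one script the site ships, one injected by
// an in-app browser (the injected URL here is made up).
const urls = [
  "https://example-shop.com/static/app.js",
  "https://connect.meta.example/pcm.js",
];
const unexpected = findUnexpectedScripts(urls, ["https://example-shop.com"]);
// unexpected === ["https://connect.meta.example/pcm.js"]
console.log(unexpected);
```

This is only a heuristic: an in-app browser controls the whole rendering engine, so a determined host app could hide its injection from the page. Krause's point is that users and site owners have no practical way to consent to, or even notice, this behavior.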

According to Felix Krause, Meta (Facebook, Instagram) is losing money due to Apple’s App Tracking Transparency. You may recall that 96% of iOS users in the U.S. opted out of App Tracking almost immediately after it became available. The vast majority of Apple users don’t want to be tracked.

It is my understanding that Meta heavily relies on revenue from advertisements that users click on. That has become much harder thanks to efforts by Apple, including Safari’s blocking of third-party cookies by default. Firefox announced Total Cookie Protection by default to prevent cross-page tracking, and Google Chrome will soon phase out third-party cookies.

In my opinion, Meta is desperately clinging to what worked for it in the past as its ad revenue dries up. Those who tap an ad link in Instagram likely had no idea that the browser it opened was altered by Meta. It’s a sketchy move that no company should be making, especially not against iOS users who opted to prevent Meta from tracking them.

The Guardian reported that Meta, the owner of Facebook and Instagram, has been rewriting websites it lets users visit, letting the company follow them across the web after they click links in its apps. According to The Guardian, the two apps have been taking advantage of the fact that users who click on links are taken to webpages in an “in-app-browser,” controlled by Facebook or Instagram.

As for me, I’m going to avoid using the Instagram app in favor of scrolling through it on my desktop computer. It seems much safer than allowing Meta to substitute the browser of its choice instead of mine – and for Meta’s own benefit.


Instagram Rolls Out Searchable Map Of Nearby Businesses



Instagram’s latest update aims to make it easier for users to find local businesses or attractions by adding a searchable map that lets you “discover popular local businesses near you”, according to an Instagram Story from Mark Zuckerberg, The Verge reported.

The map will show you a list of places nearby and will let you see posts about a certain place or only certain types of business.

How do you get to the map? The Verge reported that there are a few ways to do that – if someone tags a place in a post or story, you can tap on the tag and hit “see location” to get to the location’s page. If you move around on the map, you’ll then be able to search the area to see what’s nearby. You can also search for places (including entire cities) in the Explore tab. Tapping on a place search result will take you to it on the map.

The Verge also reported that after you have searched an area, you can use filters to narrow down the search result so you only see restaurants, bars, parks, or other types of places. You can save locations to check them out later.

TechCrunch reported that Instagram is introducing a new searchable and dynamic map experience on Instagram. The updated map experience will allow users to explore popular tagged locations around them and filter location results by specific categories, including restaurants, cafes, and beauty salons.

According to TechCrunch, Meta CEO Mark Zuckerberg posted on Instagram: “We’re introducing a new searchable map in IG today. You can now discover popular local businesses near you and filter by categories”. The post includes what the map looks like, with a “share” button at the top of the map helpfully pointed out by an arrow from the text.

Hashtag search is also available for local hashtags, such as #sanfrancisco. If your Instagram account is public, you can use location tags or stickers in your content to make it appear on the map for others to see.

Why is Instagram offering this feature now? According to TechCrunch, Google’s Senior Vice President Prabhakar Raghavan somewhat offhandedly noted that younger users are now often turning to apps like Instagram and TikTok instead of Google Search or Maps for discovery purposes. Perhaps Instagram realized that they need their own, searchable, sharable, map for the young people who use its app.

The searchable map follows Instagram’s recent addition of letting users buy products from small businesses directly through the app. People in the US can pay with Meta Pay and track their order in chat on Instagram. The payment system is PayPal, which can sometimes be problematic for sellers who are hit by scammers.