Tag Archives: Twitter

Twitter Added “Manipulated” Tag on Altered Video of Joe Biden



You may have seen a video on Twitter featuring Joe Biden, in which he appears to say “We can only re-elect Donald Trump”. It turns out the video was edited to make it sound as though that was what he said. Twitter responded by adding a “Manipulated Media” tag to the video, which immediately alerts viewers that what they are watching has been altered.

Twitter’s Synthetic and Manipulated Media policy states the following:

You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.

When Twitter has reason to believe that media shared in a Tweet has been “significantly and deceptively altered or fabricated”, it will do one or more of the following:

  •  Apply a label to the content where it appears in the Twitter product
  •  Show a warning to people before they share or like the content
  •  Reduce visibility of the content on Twitter and/or prevent it from being recommended
  •  Provide a link to additional explanations or clarifications, such as in a Twitter Moment or landing page.

CNN reported that what Joe Biden actually said was: “Excuse me. We can only re-elect Donald Trump if in fact we get engaged in this circular firing squad here. It’s gotta be a positive campaign.” The manipulated video that was shared on Twitter cut off Joe Biden’s sentence in order to make it appear that he said, “We can only re-elect Donald Trump.” In other words, the manipulated video provided misinformation to those who viewed it.

Washington Post tech policy reporter Cat Zakrzewski tweeted: “Just in: Twitter applied its new manipulated video label for the first time to a deceptively edited video of Joe Biden. It was shared by White House social media director Dan Scavino, and retweeted by the President”.

The tweet shows a screenshot of Dan Scavino’s tweet in which the video was posted. Below the video is an exclamation point inside a circle, next to the words “Manipulated media.”

To me, Twitter is doing the right thing with regard to this video. It is not okay for people to intentionally falsify information about a politician during their campaign. Manipulated video confuses voters because it isn’t always immediately apparent that what they are watching has been altered. Those who feel the need to create lies in order to win an election aren’t going to get away with it on Twitter anymore.


Twitter Expands Rules Against Hateful Conduct to Include Disease



Twitter updated its rules against hateful conduct. In July of 2019, Twitter expanded its rules against hateful conduct to include language that dehumanizes others on the basis of religion. Now, it has further expanded the rules to include language that dehumanizes others on the basis of age, disability, or disease.

TechCrunch reported that Twitter’s hateful conduct policy also includes a ban on dehumanizing speech across many categories including race, ethnicity, national origin, caste, sexual orientation, gender, and gender identity.

Twitter provided some examples of tweets that would break their rule against hateful conduct:

  •  [Religious Group] should be punished. We are not doing enough to rid us of those filthy animals.
  •  All [Age Group] are leeches and don’t deserve any support from us.
  •  People with [Disability] are subhuman and shouldn’t be seen in public.
  •  People with [Disease] are rats that contaminate everyone around them.

If you aren’t sure whether or not the thing you are about to tweet breaks Twitter’s hateful conduct rules, use the Twitter-provided examples above as a template. If your tweet is similar to those examples, you probably shouldn’t post it.

Twitter will remove tweets like these when they are reported. Tweets that break this rule pertaining to age, disease, and/or disability and were sent before March 5, 2020, will need to be deleted, but will not directly result in account suspensions. Tweets that break the rule and were posted after March 5, 2020, could result in suspensions.

Personally, I think this is good policy. I remember Twitter being a whole lot nicer to use when it launched than it is today. It is entirely possible to talk about age, disability, disease, and/or religion without dehumanizing people. If it’s too hard for you to use Twitter without dehumanizing people, then you shouldn’t be on Twitter anymore.


Tech Companies Want Staff to Work from Home Due to Coronavirus



It is a smart decision to do everything possible to limit the spread of coronavirus. Big tech companies are using the strategy of asking their employees to work from home. This may be a temporary decision, but I think the move could help normalize working from home.

The Verge reported that numerous tech companies have asked their Seattle-based employees to work from home to help prevent the spread of coronavirus. This includes Amazon, Google, Facebook, Microsoft, Twitter, and Bungie.

Microsoft is allowing and encouraging its employees based in Seattle or San Francisco to work from home. These employees can work from home through March 25, 2020.

CNBC reported that Amazon is asking employees at its Seattle and Bellevue, Washington, offices to work from home (if they are able to) until the end of the month. This decision was made after an employee tested positive for coronavirus. Amazon has also restricted all nonessential U.S. travel in response to coronavirus.

CNBC also reported that Facebook encouraged all of its 5,000 employees in Seattle to work from home for the rest of the month. Facebook has closed its Seattle office until Monday.

Twitter announced that it is strongly encouraging all employees globally to work from home if they’re able. Working from home will be mandatory for employees based in Twitter’s Hong Kong, Japan, and South Korea offices (due in part to government restrictions). Interestingly, Twitter had already begun moving towards a more distributed workforce that’s increasingly remote.

Bungie stated that it has built a fully remote infrastructure for all Bungie employees across the globe, with the goal of prioritizing the safety of their employees.

My hope is that these moves will help to normalize working from home. Employees would no longer have to spend time commuting, and could spend those hours with their families. They could reduce the amount they spend on gas each week. Workers could do their jobs without the risk of catching the next “office cold” or the flu.


Facebook and Twitter Removed Accounts Engaging in Inauthentic Behavior



Both Facebook and Twitter have announced that they have removed networks of accounts that were engaging in inauthentic behavior. The New York Times reported that the accounts used fake profile photos that were generated with artificial intelligence. The use of AI generated fake photos appears to be a new tactic.

Facebook announced that they removed two unconnected networks of accounts, Pages and groups for engaging in foreign and government interference. The first operation originated in the country of Georgia and targeted domestic audiences. Facebook removed 39 Facebook accounts, 344 Pages, 13 Groups and 22 Instagram accounts that were part of this group.

The second operation originated in Vietnam and the US and focused mainly on the US, with some content aimed at Vietnam and at Spanish- and Chinese-speaking audiences globally. Facebook removed 610 accounts, 89 Facebook Pages, 156 Groups and 72 Instagram accounts that were part of this operation.

Facebook said that some of these accounts used profile photos generated by artificial intelligence and masqueraded as Americans to join Groups and post content from The BL. To evade Facebook’s enforcement, they used a combination of fake and inauthentic accounts of local individuals in the US to manage Pages and Groups. The Page admins and account owners typically posted memes and other content about US political news and issues including impeachment, conservative ideology, political candidates, elections, trade, family values, and freedom of religion.

Facebook said its investigation linked this coordinated group to Epoch Media Group. The New York Times reported that Epoch Media Group is the parent company of the Falun Gong-related publication and conservative news outlet The Epoch Times. The Epoch Media Group has denied that it is linked to the network.

Twitter announced it removed 5,929 accounts for violating Twitter’s platform manipulation policies. Their investigation attributed these accounts to “a significant state-backed information operation” originating in Saudi Arabia.

The accounts represent the core portion of a larger network of more than 88,000 accounts engaged in spammy behavior across a wide range of topics. Twitter’s investigations traced the source of the coordinated activity to Smaat, a social media marketing and management company based in Saudi Arabia.

It is very important to realize that you cannot believe everything you see on social media. An account that appears to have a realistic photo could actually be one that was generated by AI. Do some fact checking before sharing things posted by accounts that are run by people you don’t know.


Twitter will Label and Warn about Deepfakes, but won’t Remove them



Twitter announced in October of this year that they are working on a new policy to address synthetic and manipulated media (also called “deepfakes”). Today, Twitter presented a draft of what they plan to do when they see manipulated media that purposely tries to mislead or confuse people.

Based on conversations with experts and researchers, Twitter proposes that synthetic and manipulated media be defined as: “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.” Twitter notes that these are sometimes referred to as deepfakes or shallowfakes.

You may have seen some examples of this on social media. There was an altered video passed around of U.S. Speaker of the House of Representatives Nancy Pelosi, which was made to appear as though she was slurring her words. There is also a video where someone took faces from well-known paintings and made it look as if the faces were speaking.

Twitter’s draft policy regarding deepfakes states that Twitter may:

  • Place a notice next to Tweets that share synthetic or manipulated media
  • Warn people before they share or like Tweets with synthetic or manipulated media; or
  • Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

In addition, Twitter may remove tweets that include synthetic or manipulated media that is misleading and could threaten someone’s physical safety or lead to other serious harm. It appears that, other than this exception, Twitter intends to allow deepfakes to spread. Twitter has a survey for people who want to provide feedback about this draft policy.

Personally, I don’t think Twitter’s draft policy will be very effective. Those who view deepfakes that match their opinions or political views are unlikely to accept that what they see has been altered. Warning people that they are about to like or share a deepfake isn’t going to deter those who think the deepfake is more believable than reality, and who think that Twitter is “censoring” content.


Facebook and YouTube are Removing Alleged Name of Whistleblower



It is stunning how much damage people can do by posting the (potential) name of a whistleblower on social media, and having that name be passed around. This poses a dilemma for social media platforms. Both Facebook and YouTube are deleting content that includes the alleged name of the whistleblower that sparked a presidential impeachment inquiry. Twitter is not.

The New York Times reported a statement they received in an email from a Facebook spokeswoman:

“Any mention of the potential whistleblower’s name violates our coordinating harm policy, which prohibits content ‘outing of witness, informant or activist’,” a Facebook spokeswoman said in an emailed statement. “We are removing any and all mentions of the potential whistleblower’s name and will revisit this decision should their name be widely published in the media or used by public figures in debate.”

The New York Times reported that an article that included the alleged name of the whistleblower was from Breitbart. This is interesting, because Breitbart is among the participating publications that Facebook included in Facebook’s “high quality” news tab. (Other publications include The New York Times, the Washington Post, Wall Street Journal, BuzzFeed, Bloomberg, ABC News, Chicago Tribune and Dallas Morning News.) Facebook has been removing that article, which indicates that the company does not feel the article is “high quality”.

CNN reported that a YouTube spokesperson said videos mentioning the potential whistleblower’s name would be removed. The spokesperson said YouTube would use a combination of machine learning and human review to scrub the content. The removals, the spokesperson said, would affect the titles and descriptions of videos as well as the video’s actual content.

The Hill reported that Twitter said in a statement that it will remove posts that include “personally identifiable information” on the alleged whistleblower, such as his or her cell phone number or address, but will keep up tweets that mention the name.


Twitter will Show More Ads to Users with High Follower Counts



Twitter’s third-quarter earnings were not as good as expected, and the company has blamed this on problems with technology that helps advertisers promote mobile apps on the platform. Possibly as a result of this situation, Twitter has decided to show more ads to users who have high follower counts.

CNBC reported that Twitter had technological issues with its “Mobile Application Promotion” (MAP) suite of products, which helps advertisers promote mobile apps on the platform, including app installs, conversions, and engagements. This is, apparently, why Twitter’s shares fell as much as 20% after the third-quarter earnings were announced.

The details about what was happening with MAP are sketchy (in my opinion). According to CNBC, Twitter said it inadvertently used information that users wanted to be private as a way of serving ads to them, including their device data.

For example, Twitter gives advertisers the opportunity to target users based on the devices they’re using to access the platform. Advertisers can reach audiences based on their operating system version, a specific device, WiFi connectivity, mobile carrier, and whether a device is new, which indicates its owner might be on the hunt for new apps or services.

Before Twitter lets the advertisers get grabby with all that data, the MAP was supposed to ask the user for permission first. That’s not what actually happened, though. Twitter Chief Financial Officer Ned Segal said: “That setting wasn’t working as expected.” Twitter was using those device settings “even if people had asked us not to.”

What is Twitter going to do to fix this problem? It says it has an improved MAP in the works, but it doesn’t know when that will be ready.

CNBC reported that in recent weeks, Twitter users who have high follower counts have been commenting that they were seeing more ads than before. Twitter admitted that, in the past, it showed fewer (or no) ads to those accounts. But now, Twitter is going to show them more ads. Perhaps those who are annoyed with ads will start using ad blockers or browsers that have ad blockers built in.