
YouTube is Making Dislike Counts Private



YouTube announced that it is making dislike counts private across the platform. The dislike button itself will remain, but only the creator of a video will see how many dislikes it received. The change is rolling out gradually, starting today.

This decision by YouTube to change the way the dislike button works was not made on a whim. It follows an experiment that hid the number of dislikes from viewers to see how their behavior would change. YouTube noted that the experiment ended on November 10, 2021.

Here is some explanation from YouTube about the change to the dislike button:

As part of an experiment, viewers could still see and use the dislike button. But because the count was not visible to them, we found that they were less likely to target a video’s dislike button to drive up the count. In short, our experiment data showed a reduction in dislike attacking behavior. We also heard directly from smaller creators and those just getting started that they are unfairly targeted by this behavior – and our experiment confirmed that this does occur at a higher proportion on smaller channels.

YouTube has made it clear that viewers can still use the dislike button. The difference is they won’t see how many other people have used it. The number of dislikes will only be viewable by the creator of the video through YouTube Studio.
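
As a rough illustration of what this looks like mechanically, here is a minimal sketch in Python, assuming invented field names rather than YouTube’s actual Data API: the dislike count stays in the data, but it is only included in responses for the video’s owner.

```python
# Minimal sketch with invented field names (not YouTube's real Data API):
# the dislike count remains in the data model, but is only serialized
# into the response when the requester is the video's owner.

def public_stats(video: dict, viewer_id: str) -> dict:
    """Build the stats payload a given viewer is allowed to see."""
    stats = {"likes": video["likes"]}
    if viewer_id == video["owner_id"]:  # only the creator, e.g. in Studio
        stats["dislikes"] = video["dislikes"]
    return stats

video = {"owner_id": "creator123", "likes": 4200, "dislikes": 17}
print(public_stats(video, "random_viewer"))  # {'likes': 4200}
print(public_stats(video, "creator123"))     # {'likes': 4200, 'dislikes': 17}
```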

YouTube also stated the following on their blog post:

We want to create an inclusive and respectful environment where creators have the opportunity to succeed and feel safe to express themselves. This is just one of many steps we are taking to continue to protect creators from harassment. Our work is not done, and we’ll continue to invest here.

As a person who posts gameplay videos on YouTube, I am in favor of this change. It is always good when a company makes an effort to prevent harassment. I’m hoping this change will make the angriest commenters decide that using the dislike button isn’t fun anymore.

There is some evidence that hiding the numbers on the dislike button works. The Verge reported: “YouTube says that when it tested hiding dislike numbers, people were less likely to use the button to attack the creator – commenting ‘I just came here to dislike’ was seemingly less satisfying when you don’t actually get to see the number go up.”


YouTube will Remove Content that Alleges Widespread Election Fraud



In a lengthy blog post, YouTube announced updates to their work supporting the integrity of the 2020 U.S. election. This includes removing content that violates their policies. In addition to things that YouTube was already removing, the company will now remove content that alleges widespread election fraud.

Yesterday was the safe harbor deadline for the U.S. Presidential election and enough states have certified their election results to determine a President-elect. Given that, we will start removing any piece of content uploaded today (or anytime after) that misleads people by alleging that widespread fraud or errors changed the outcome of the 2020 U.S. Presidential elections, in line with our approach towards historical U.S. Presidential elections.

As an example, YouTube pointed out that they will remove videos claiming that a Presidential candidate won the election due to widespread software glitches or counting errors. News coverage and commentary on these issues can remain on YouTube if there’s sufficient education, documentary, scientific, or artistic context.

Part of YouTube’s blog post mentions that, since Election Day, relevant fact-check information panels from third-party fact checkers were triggered over 200,000 times above relevant election-related search results, including for voter fraud narratives such as “Dominion voting machines” and “Michigan recount.”

The Hill reported that YouTube’s move does not appear to involve the removal of any content fitting that description if it was uploaded before Wednesday. The Hill also stated that YouTube has said that since September it has terminated more than 800,000 channels and “thousands of harmful and misleading elections-related videos” for violating its existing content policies.

Personally, I think YouTube is making good decisions about what to remove. The recounts are over. The courts have dismissed many (if not all) of President Trump’s election-related lawsuits. Many states have certified their election results. There is no reason for YouTube to keep hosting misleading election-related content.


YouTube Takes Stronger Stance Against Personal Attacks



YouTube announced a series of policy and product changes that update how they will tackle harassment on the platform. It includes a stronger stance against threats and personal attacks, and consequences for those who engage in harassing behavior.

YouTube will now prohibit explicit threats and veiled or implied threats. This includes content that simulates violence towards an individual or language suggesting physical violence may occur. In addition, YouTube will no longer show content that maliciously insults someone based on protected attributes such as race, gender expression, or sexual orientation.

Something we heard from our creators is that harassment sometimes takes the shape of a pattern of repeated behavior across multiple videos or comments, even if any individual video doesn’t cross our policy line. To address this, we’re tightening our policies for the YouTube Partner Program (YPP) to get even tougher on those who engage in harassing behavior and to ensure we only reward trusted creators. Channels that repeatedly brush up against our harassment policy will be suspended from YPP, eliminating their ability to make money on YouTube. We may also remove content from channels if they repeatedly harass someone. If this behavior continues, we’ll take more severe action including issuing strikes or terminating a channel altogether.

In addition, YouTube will remove comments that clearly violate their policies. They will also give creators the option to review a comment before it is posted on their channel. Last week, YouTube began turning on this feature by default for YouTube’s largest channels with the site’s most active comment sections. It will roll out to most channels by the end of the year.

BuzzFeed News reported on a situation that unfolded on YouTube earlier this year between Steven Crowder and Carlos Maza. According to BuzzFeed News, YouTube decided, after a review, that some of Crowder’s content crossed a line and would be removed from the platform.

Personally, I think that cracking down on harassment can only be a good thing. Nobody enjoys being the target of harassment, and I can see how that experience could cause a person to stop posting videos on YouTube. I really like that YouTube will kick repeat harassers out of the YPP program. Taking away a bully’s ability to make money on YouTube could be an effective deterrent.


Facebook and YouTube are Removing Alleged Name of Whistleblower



It is stunning how much damage people can do by posting the (potential) name of a whistleblower on social media and having that name passed around. This poses a dilemma for social media platforms. Both Facebook and YouTube are deleting content that includes the alleged name of the whistleblower whose complaint sparked a presidential impeachment inquiry. Twitter is not.

The New York Times reported a statement they received in an email from a Facebook spokeswoman:

“Any mention of the potential whistleblower’s name violates our coordinating harm policy, which prohibits content ‘outing of witness, informant or activist’,” a Facebook spokeswoman said in an emailed statement. “We are removing any and all mentions of the potential whistleblower’s name and will revisit this decision should their name be widely published in the media or used by public figures in debate.”

The New York Times reported that an article that included the alleged name of the whistleblower came from Breitbart. This is interesting, because Breitbart is among the participating publications included in Facebook’s “high quality” news tab. (Other publications include The New York Times, the Washington Post, the Wall Street Journal, BuzzFeed, Bloomberg, ABC News, the Chicago Tribune, and the Dallas Morning News.) Facebook has been removing that article, which indicates that the company does not consider the article “high quality”.

CNN reported that a YouTube spokesperson said videos mentioning the potential whistleblower’s name would be removed. The spokesperson said YouTube would use a combination of machine learning and human review to scrub the content. The removals, the spokesperson said, would affect the titles and descriptions of videos as well as the video’s actual content.
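
As a rough illustration of the scrubbing CNN describes, here is a hypothetical sketch in Python that scans a video’s title, description, and transcript for a banned term and queues any match for human review. Every name in it is invented; none of this reflects YouTube’s real systems.

```python
# Hypothetical sketch only: the term list, field names, and review queue
# are invented and do not reflect YouTube's actual tooling. It scans a
# video's metadata and content for a banned term.

BANNED_TERMS = {"example name"}  # placeholder; real lists are not public

def find_violations(video: dict) -> list:
    """Return the fields of a video that mention a banned term."""
    flagged = []
    for field in ("title", "description", "transcript"):
        text = video.get(field, "").lower()
        if any(term in text for term in BANNED_TERMS):
            flagged.append(field)
    return flagged

video = {
    "title": "Breaking news",
    "description": "A discussion that mentions Example Name directly",
    "transcript": "",
}
for field in find_violations(video):
    print(f"Queueing '{field}' for human review")
```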

The Hill reported that Twitter said in a statement that it will remove posts that include “personally identifiable information” on the alleged whistleblower, such as his or her cell phone number or address, but will keep up tweets that mention the name.


YouTube Apologized for Changing Their Verification Program



YouTube CEO Susan Wojcicki apologized to YouTube creators after the company received negative responses to its new verification program. As a result, the new look for the verification badge will be delayed and will roll out next year. It is unclear exactly when that will happen; “next year” could mean a few months from now.

CEO Susan Wojcicki tweeted: “To our creators & users – I’m sorry for the frustration & hurt that we caused with our new approach to verification. While trying to make improvements, we missed the mark. As I write this, we’re working to address your concerns & we’ll have more updates soon.”

The updates were added that same day, in a post on YouTube’s Creator Blog titled “Updates to YouTube’s verification program”. The post is an attempt to clarify the changes to the verification badge. YouTube says the idea behind the update was to protect creators from impersonation and to address user confusion. What were users confused about?

Also, nearly a third of YouTube users told us that they misunderstood the badge’s meaning, associating it with endorsement of content, and not an indicator of identity. While rolling out improvements to this program, we completely missed the mark. We’re sorry for the frustration that this caused and have a few updates to share.

Here are some things to know:

  • Channels that already have the verification badge will keep it and do not have to appeal.
  • All channels that have over 100,000 subscribers will still be eligible to apply for the verification badge. YouTube will reopen the application process by the end of October.
  • YouTube will verify channels that have over 100,000 subscribers. The channel must also be authentic, meaning it represents the real creator, brand, or entity it claims to be. Finally, the channel must be complete, meaning it is public, has a description, a channel icon, and content, and is active on YouTube. (These criteria are restated in the sketch below.)
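
To make the criteria concrete, here is a minimal sketch in Python that restates them as a single eligibility check. The Channel structure and its field names are my own invention, not YouTube’s real API or internal review process.

```python
# A rough restatement of the stated criteria in code, using hypothetical
# field names (nothing here reflects YouTube's real API or internal checks).

from dataclasses import dataclass

@dataclass
class Channel:
    subscribers: int
    is_authentic: bool   # represents the real creator, brand, or entity
    is_public: bool
    has_description: bool
    has_icon: bool
    has_content: bool
    is_active: bool

def eligible_for_badge(ch: Channel) -> bool:
    """Apply the three stated criteria: size, authenticity, completeness."""
    complete = (ch.is_public and ch.has_description and ch.has_icon
                and ch.has_content and ch.is_active)
    return ch.subscribers > 100_000 and ch.is_authentic and complete

print(eligible_for_badge(Channel(150_000, True, True, True, True, True, True)))  # True
print(eligible_for_badge(Channel(50_000, True, True, True, True, True, True)))   # False
```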

To me, it seems like YouTube is very concerned about inauthentic channels that attempt to impersonate YouTube creators. That’s a good thing for YouTube to take action on.

The clarification that the badge means “an indicator of identity”, and not a sign that YouTube endorses the content on that channel, is sketchy. It feels like a loophole to allow YouTube to avoid taking responsibility for the worst content that appears on their platform.


YouTube Shares its Plans for Removing Harmful Content



YouTube posted information on its official blog about its plan for removing harmful content. It is part of YouTube’s effort to live up to their responsibility while preserving the power of an open platform.

The plan consists of four principles:

  • Remove content that violates YouTube’s policy as quickly as possible
  • Raise up authoritative voices when people are looking for breaking news and information
  • Reward trusted, eligible creators and artists
  • Reduce the spread of content that brushes right up against YouTube’s policy line

Over the next several months, YouTube will provide more detail on the work they are doing to support these principles. The first focus is on “Remove”. YouTube has been removing harmful content since the site started, but has accelerated these efforts in recent years.

Because of this ongoing work, over the last 18 months, we’ve reduced views on videos that are later removed for violating our policies by 80%, and we’re continuously working to reduce this number further.

After reviewing a policy, YouTube may discover that fundamental changes aren’t needed. But, if the review uncovers areas that are confusing to the community, YouTube clarifies their existing guidelines.

For example, YouTube provided more detail about when a “challenge” is too dangerous for YouTube. YouTube’s hate speech update was launched in early June. YouTube says the profound impact of their hate speech policy is already evident. In April, they announced they are updating their harassment policy, including creator-on-creator harassment.

In addition to human reviewers, YouTube uses machine learning technology to help detect potentially violative content. This lets YouTube remove content that breaks its rules before that content is widely viewed, or even viewed at all. More than 80% of auto-flagged videos were removed before they received a single view in the second quarter of 2019.
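
As an illustration of the spirit of that pipeline, here is a toy sketch in Python of an ML flagger that routes high-confidence cases to removal and uncertain ones to human reviewers. The model, scores, and thresholds are all invented for this sketch.

```python
# Toy pipeline in the spirit of what YouTube describes (an ML flagger plus
# human reviewers); the model, scores, and thresholds are invented.

def classify(video_features: dict) -> float:
    """Stand-in for a trained model returning a violation probability."""
    return video_features.get("violation_score", 0.0)

def route(video_features: dict) -> str:
    """Route a video based on how confident the flagger is."""
    score = classify(video_features)
    if score > 0.95:
        return "remove"        # high confidence: remove before it gains views
    if score > 0.60:
        return "human_review"  # uncertain cases go to human reviewers
    return "allow"

print(route({"violation_score": 0.97}))  # remove
print(route({"violation_score": 0.70}))  # human_review
print(route({"violation_score": 0.10}))  # allow
```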

YouTube also notes that the nearly 30,000 videos it removed for hate speech over the last month generated just 3% of the views that knitting videos did over the same time period. Personally, I love that content as creative and informative as knitting videos gets so many more views than the awful videos that contain hate speech.


YouTube Changes Manual Content ID Claiming Policies



YouTube has announced additional changes to its manual claiming policies that are intended to improve fairness in the creator ecosystem, while still respecting owners’ rights to prevent unlicensed use of content. This balancing act may, or may not, work out as people might hope it would.

One concerning trend we’ve seen is aggressive manual claiming of very short music clips used in monetized videos. These claims can feel particularly unfair, as they transfer all revenue from the creator to the claimant, regardless of the amount of music claimed. A little over a month ago, we took a first step in addressing this by requiring copyright owners to provide timestamps for all manual claims so you know exactly which part of your video is being claimed. We also made updates to our Creator Studio that allow you to use those timestamps to remove manually claimed content from your videos, automatically releasing the claim and restoring monetization.
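
To make the timestamp mechanics concrete, here is a hypothetical Python model of the workflow YouTube describes: a manual claim carries start and end timestamps, and trimming the claimed span releases the claim and restores monetization. The data structures and the editing step are my own invention, not YouTube’s actual tooling.

```python
# A hypothetical model of the timestamped-claim workflow described above;
# the structures and the editing step are invented, not YouTube's real API.

from dataclasses import dataclass
from typing import List

@dataclass
class ManualClaim:
    claimant: str
    start_sec: float  # required timestamps identify the exact claimed clip
    end_sec: float

@dataclass
class Video:
    length_sec: float
    claims: List[ManualClaim]
    monetized: bool = False

def remove_claimed_segment(video: Video, claim: ManualClaim) -> Video:
    """Cut the claimed span out of the video and release that claim."""
    video.length_sec -= (claim.end_sec - claim.start_sec)
    video.claims.remove(claim)
    if not video.claims:
        video.monetized = True  # last claim released, monetization restored
    return video

claim = ManualClaim("Some Label", 120.0, 127.5)
video = remove_claimed_segment(Video(600.0, [claim]), claim)
print(video.monetized, video.length_sec)  # True 592.5
```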

YouTube is now announcing new changes to their manual claiming processes. Here are some key points:

  • Including someone else’s content without permission means your video can still be claimed and copyright owners will still be able to prevent monetization or block the video from being viewed.
  • YouTube will forbid copyright holders from using the Manual Claiming tool to monetize creator videos with very short or unintentional uses of music.
  • Copyright claims created by the Content ID match system, which are the vast majority, are not impacted by this policy.
  • Enforcement of these policies begins in mid-September. After that, copyright owners who repeatedly fail to adhere to the policies will have their access to Manual Claiming suspended.

Interestingly, YouTube points out: “Without the option to monetize, some copyright owners may choose to leave very short or unintentional uses unclaimed.” YouTube also reminds creators that they can safely use the music and sound effects in the YouTube Audio Library.

From this, it sounds to me as though YouTube is fed up with copyright holders who act in predatory ways. They shouldn’t get to take the creator’s entire revenue from a long video just because a few seconds of a song is in it. Separating these claims from financial rewards is a good idea.