Google is trying to monitor and delete hate speech from its popular YouTube platform with new rules and warnings; however, the company's bottom line, quite literally, is the number of clicks that feeds its revenue stream.
An insightful post by Mathew Ingram in the Columbia Journalism Review dissects Google's twin opposing goals: deleting hate speech via algorithms and human moderators while maintaining audience engagement. That engagement includes opinions many might find offensive.
Ingram cites interviews with former YouTube staffers suggesting that Google cares as much about how long users spend on the site, regardless of what they watch, as about the offensive nature of that content.
In its latest effort against hate speech, Google reports that it took down more than 100,000 videos and 17,000 channels for violating hate speech rules. Also deleted were more than 500 million comments.
Offensive content is screened via algorithm and human moderators.
Interpersonal Divide has covered these hit-and-miss methods in previous posts, including one last year, titled “Violence, Bias, Hate: What Algorithms Miss and Why You Should Care.”
Nonetheless, the latest crackdown has merit.
According to its official blog, the company has “a longstanding policy against hate speech,” specifically targeting supremacist content. The latest initiative prohibits “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” The policy bans content that promotes Nazi ideology, or denies documented events, such as the Holocaust or the shooting at Sandy Hook Elementary.
The company's community rules also ban:
- Nudity or sexual content. “Also, be advised that we work closely with law enforcement and we report child exploitation.”
- Harmful or dangerous content. “Videos showing such harmful or dangerous acts may get age-restricted or removed depending on their severity.”
- Hateful content. “[W]e don’t support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity.”
- Violent or graphic content. “It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational, or gratuitous.”
- Harassment and cyberbullying. “If harassment crosses the line into a malicious attack it can be reported and may be removed.”
- Spam, misleading metadata, and scams. “Don’t create misleading descriptions, tags, titles, or thumbnails in order to increase views.”
- Copyright. “Respect copyright. Only upload videos that you made or that you’re authorized to use. This means don’t upload videos you didn’t make.”
- Privacy. “If someone has posted your personal information or uploaded a video of you without your consent, you can request removal of content based on our Privacy Guidelines.”
- Impersonation. “Accounts that are established to impersonate another channel or individual may be removed under our impersonation policy.”
- Child Safety. “Also, be advised that we work closely with law enforcement and we report child endangerment.”
Interpersonal Divide discusses these and related issues involving technology at home, school and work. Here’s an excerpt about hostility as the new normal:
Hostility as social norm. Surveys show that society is becoming more uncivil, not only at workplaces but also during commutes to them, because of road rage and distracted driving. All of that has spilled into the home, triggering conflict there. Cyberbullying and subsequent online incivility have led to hostile work environments, with such consequences as absenteeism, turnover, grievances and even lawsuits.
We’ll continue to monitor Google’s efforts to remove hate speech to ascertain whether the company is living up to its well-publicized commitments.