Revisiting ProPublica’s Report on Algorithmic Hate Speech

Last year ProPublica investigated Facebook’s hate-speech algorithms, learning that moderators were being taught to elevate “white men” over “black children” as a protected class. The case is worth revisiting because it shows how the complexities of the English language confound machine logic.

Machines correlate without causation. That’s a key concept in Interpersonal Divide’s critique of “artificial intelligence.” Technical systems are adept at answering four of the five “Ws” and the “H” of mass communication: Who, What, When, Where and How.

Those are the only qualifiers you need to make a sale. Social media platforms, especially Facebook, sell to us and surveil us simultaneously whenever we feed their algorithms. If we receive a new pair of shoes in the mail for our birthday, and we display them while thanking Grandma, the machine knows who got what gift, when, how and from where. That’s the point. That’s how social networks create value via consumer narratives.

Interpersonal Divide cites computer scientist and author Jaron Lanier’s explanation. Machines with copious amounts of data may be able to discern odd commercial truths: People with bushy eyebrows who like purple toadstools in spring might hanker for hot sauce on mashed potatoes in autumn. That would enable a hot sauce vendor to place a link in front of bushy-eyebrowed Facebookers posting toadstool photos, increasing the chance of a sale, “and no one need ever know why.”[1]

The narrative knows:

  • Who: people with bushy eyebrows
  • What: hot sauce
  • When: autumn
  • Where: a Facebook IP address
  • How: on mashed potatoes

No one ever need know Why. A sale is a sale is a sale.
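Lanier’s scenario can be reduced to a few lines of code. The sketch below is purely illustrative–the user records, field names and segment rule are all invented–but it shows how a targeting system can surface a correlation and act on it without ever modeling why:

```python
# Toy sketch of correlation-without-causation ad targeting, after
# Lanier's example. All records and field names are invented.
users = [
    {"bushy_eyebrows": True,  "posts_toadstools": True,  "buys_hot_sauce": True},
    {"bushy_eyebrows": True,  "posts_toadstools": True,  "buys_hot_sauce": True},
    {"bushy_eyebrows": True,  "posts_toadstools": False, "buys_hot_sauce": False},
    {"bushy_eyebrows": False, "posts_toadstools": True,  "buys_hot_sauce": False},
    {"bushy_eyebrows": False, "posts_toadstools": False, "buys_hot_sauce": False},
]

def hot_sauce_rate(predicate):
    """Share of users matching `predicate` who buy hot sauce."""
    matched = [u for u in users if predicate(u)]
    return sum(u["buys_hot_sauce"] for u in matched) / len(matched)

# The machine finds the segment worth targeting -- "and no one need
# ever know why."
segment = lambda u: u["bushy_eyebrows"] and u["posts_toadstools"]
print(hot_sauce_rate(segment))         # 1.0 within the segment
print(hot_sauce_rate(lambda u: True))  # 0.4 across all users
```

A hot-sauce vendor needs only the correlation, never the cause, to place the ad.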

When it comes to Facebook’s algorithm, however, we do know why “white men” outrank “black children” according to machine logic. The algorithm, which purportedly has been tweaked since the ProPublica report, bases hate-speech decisions on what seems at first blush a logical foundation: if a suspected hate message targets a protected class defined by categories such as race and gender (white men), that trumps a class modified by a subset such as age (black children).
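The reported rule can be caricatured in a handful of lines. This is a toy sketch, not Facebook’s actual code; the category sets and function are invented to illustrate the logic ProPublica described:

```python
# Hypothetical reconstruction of the rule ProPublica reported:
# a target is "protected" only if every attribute describing it is a
# protected category; a subset modifier (like age) removes protection.
PROTECTED_CATEGORIES = {"race", "gender", "religion", "national origin"}
SUBSET_MODIFIERS = {"age", "occupation", "appearance"}

def is_protected_target(attributes):
    """Return True only if all attributes are protected categories."""
    return all(attr in PROTECTED_CATEGORIES for attr in attributes)

# "white men" = race + gender: both protected, so attacks are blocked.
print(is_protected_target({"race", "gender"}))  # True
# "black children" = race + age: age is a subset, so not protected.
print(is_protected_target({"race", "age"}))     # False
```

By this logic, the more precisely a message names its victims, the less protection they receive.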

Of course, the English language doesn’t work this way, especially since one word may have multiple meanings that change based on its position in a sentence. Rearrange the words of this sentence–“Stop drinking that this instant; tea is better for you!”–and you get several variations, such as “Better stop drinking that; this instant tea is for you.”

As ProPublica noted, Facebook allowed U.S. Congressman Clay Higgins to threaten “radicalized” Muslims with this post: “Hunt them, identify them, and kill them. Kill them all. For the sake of all that is good and righteous. Kill them all.”

However, Facebook removed this post from Boston poet and Black Lives Matter activist Didi Delgado: “All white people are racist. Start from this reference point, or you’ve already failed.”

Why? Human monitors, trained by the machine to think like one, followed the algorithmic rule that “white people” plus an attack word (“racist”) trumped “radicalized” (a subset) Muslims. Everyone seemed to miss “hunt” and “kill them all.”

This illustration depicts how that could have happened.

[Illustration: Facebook Bias]

Interpersonal Divide asks readers to understand technology from a programming rather than consumer perspective so as to explain “why” things happen in the age of the machine.

This is one small incident that indicates a larger issue: machines correlating on biased data with flawed computer logic. You can read more about Facebook’s rules via the sites referenced in this report.

[1] Jaron Lanier, Who Owns the Future? (New York: Simon & Schuster, 2013), p. 115.


Kavanaugh-Ford Hoaxes Appeal to Base–Instinct, That Is


Over the weekend, in Facebook and Twitter feeds, Americans–not Russians–perpetuated false claims seeking to play to their “base,” a word whose first meaning is “the lowest part,” as in base instinct.

The goal of partisan trolls was to debase the names and reputations of assault survivor Christine Blasey Ford and Supreme Court nominee Brett Kavanaugh. Sensational claims about both have been shown to be baseless.

The distressing news, however, was that these false reports–1982 photos of a drunk Kavanaugh and a series of photos depicting Ford as a Democratic operative–were believed by many, flooding the internet and spreading to friends listed in social media accounts.

Sadly, lies have been shown to travel faster and farther than truth, according to Slate.

Thankfully, fact-checkers have been able to post refutations almost as soon as the false accounts were posted.

Concerning the Kavanaugh photo, the fact-check stated:

While the picture on the right is, in fact, Brett Kavanaugh, the picture of the passed-out man on the left is a Getty Images stock photo titled “portrait of a young man asleep on the couch after drinking too much beer” that was created long after 1982.

Concerning the Ford photo, the fact-check stated:

This photograph was taken on 12 November 2016 at a protest against President Trump in New York City by photographer Christopher Penler. The image is available on a variety of stock photograph websites, where it is consistently presented as an image of an anonymous woman with a “Not My President” sign. It wasn’t until Christine Blasey Ford came forward with an allegation of sexual assault against Supreme Court nominee Brett Kavanaugh in September 2018 that the picture started circulating with Ford’s name attached to it.

It is important to recognize that hoaxes play on the deeply held beliefs, fears, convictions and desires of the mass media audience. In controversial political news, such as Ford’s allegation of sexual violence, conditions were rife for fake news and hoaxes.

For the record, here is the Sept. 27 transcript of the Kavanaugh hearing, supplied by the Washington Post.

Interpersonal Divide in the Age of the Machine cautions readers about Internet trolls and how they influence public perception. Here’s an excerpt:

Hoaxes. Hacks. Stunts. Pranks. Fraud. Counterfeits. Conspiracy theories. Altered photographs. Doctored records. Viral videos. Facts died in the process. “The era of the fact is coming to an end,” writes Harvard historian Jill Lepore in the New Yorker, creating mayhem, “not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines.”

The loss of fact has led to other interpersonal losses. Thus, it is important for everyone who uses social media to fact-check claims on reputable fact-checking or traditional news sites.

Weaponizing Wikipedia: GOP Senators Doxed

Doxing is the practice of sharing private information about an individual via “publicly available databases and social media websites, hacking, and social engineering.” –Wikipedia

As the world watched political and personal strategies play out in the Sept. 27, 2018 Supreme Court hearings, another digital strategy was being launched against GOP senators: doxing.

According to The Washington Post, Lindsey Graham (R-S.C.) was one of three Republicans whose phone numbers and home addresses were added to their Wikipedia biography pages. This occurred while Graham was questioning Supreme Court nominee Brett Kavanaugh.

Utah Sens. Mike Lee (R) and Orrin G. Hatch (R) were similarly doxed.

The Post published this screenshot redacting private information about Hatch.

But this was not the end of the Wikipedia incident. After the private information was removed from Wikipedia, the addresses and phone numbers were circulated again on Twitter via the account @congressedits, which The Post described as “a social media ‘accountability bot’ that tweets edits to the online encyclopedia made from IP addresses assigned to the U.S. Capitol.” The bot took a screenshot of the doxed pages and sent it to its 65,000 followers.

Type of site: Twitter account
Available in: English
Launched: July 8, 2014
Current status: Online

Wikipedia states that @congressedits tweets changes made by “anyone using a computer on the U.S. Capitol complex’s computer network, including both staff of U.S. elected representatives and senators as well as visitors such as journalists, constituents, tourists, and lobbyists.”

While the news media consider @congressedits a digital watchdog, inasmuch as reporters instantly see what House and Senate aides are posting about their bosses, doxing remains a semi-anonymous weapon in the digital arsenal of partisan politics. Typically, content such as this can be traced to an IP address, indicating where the doxing took place (in this case, from a House computer).
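The bot’s core test is simple to sketch. Here is a minimal version using Python’s standard ipaddress module; the CIDR blocks below are illustrative stand-ins for the Capitol’s actual published ranges:

```python
import ipaddress

# Illustrative stand-ins for IP ranges assigned to the U.S. Capitol
# network; a real watchdog bot would load the published allocations.
CAPITOL_RANGES = [
    ipaddress.ip_network("143.231.0.0/16"),
    ipaddress.ip_network("137.18.0.0/16"),
]

def from_capitol(ip_string):
    """True if an edit's IP address falls inside a watched range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in CAPITOL_RANGES)

print(from_capitol("143.231.12.34"))  # True: inside a watched range
print(from_capitol("8.8.8.8"))        # False: outside all ranges
```

The same check cuts both ways: it powers transparency reporting, and, as this incident shows, it can just as easily rebroadcast doxed information.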

Tracking the IP address may narrow the number of suspects, but plausible deniability remains an alibi inasmuch as staffers can claim, “It wasn’t me.”

That’s partly true. It was the technology.

Once again, this incident shows the nature of technology. Purpose and programming–meant for transparency and public access–were weaponized during live testimony in a historic proceeding.

Interpersonal Divide in the Age of the Machine devotes several chapters to the nature of technology, i.e., that of a scorpion: it is what it is. Here’s a citation:

The French-Maltese philosopher Jacques Ellul believed that technology is “a self-determining organism or end in itself whose autonomy transformed centuries’ old systems while being scarcely modified in its own features.”[1] In simple terms, that means that technology changes everything it touches without changing much itself. Introduce technology into the economy, and the economy is all about technology. Introduce it into the home, and home life is about the technology. Introduce it into school systems, and education is about the technology. Introduce it into employment, and you have the same effect.

Introduce it into an “accountability” bot such as @congressedits, and the bot no longer is about accountability but doxing to shape public opinion according to partisan politics.

[1] Jacques Ellul, “The Autonomy of the Technological Phenomenon,” in Robert C. Scharff and Val Dusek (eds.), Philosophy of Technology: The Technological Condition (Malden, Mass.: Blackwell, 2003), p. 346.

When Public Space Becomes Unsafe

With a spate of recent daylight murders garnering national attention–a female jogger and a golfer in Iowa and another jogger in New York City–one wonders whether the concept of “Take Back the Night,” an effort to end violence, especially against women, should be revised to “Take Back the Day.”

A recent Gallup poll shows close to 40 percent of adults–45 percent women, 27 percent men–believe the immediate area around their home may be unsafe to walk alone at night.

Daylight violence is disturbing because of its brazen disregard for witnesses. According to the U.S. Department of Justice, “the number of violent crimes committed by adults increases hourly from 6 a.m. through the afternoon and evening hours, peaks at 10 p.m., and then drops to a low point at 6 a.m.”

Violent crimes by juveniles hit a high point between 3 p.m. and 4 p.m., the hour immediately following the end of the school day.

Many variables affect our perception of safety. As the website “Safe Communities” posits, factors include life experiences, beliefs, type of community, age, socioeconomic status, type of job and employment status, to name a few.

For insight, we might look to the philosophy of social activist Parker J. Palmer who wrote that the most public place is the street where people send a message through the channel of their bodies in real place, acknowledging that “we occupy the same territory, belong to the same human community.”[1]

Cited in Interpersonal Divide, Palmer discusses how suburban sprawl changed our notion of community. For instance, in the 1980s, mega malls replaced Main Street, which later was deemed unsafe. Then malls themselves were deemed unsafe.

In his 1981 book, The Company of Strangers, Palmer made this prophetic statement:

When people perceive real habitat to be unsafe, they withdraw from it, and it becomes unsafe. “Space is kept secure not primarily by good lighting or police power but by the presence of a healthy public life.”[2]

Perhaps it is time for society to assess whether increasing use of technology has played a role in the withdrawal from community as Palmer had envisioned it, a communal and, in many ways, vibrant space. If we opt to spend more time in virtual rather than real habitat, even as we walk the digital streets, we may lose sight of what it means to occupy the same territory with neighbors and our moral obligation to nurture and monitor our collective interactions there.

It is also important to note that use of technology may mitigate risk. New digital products–wearables like Athena and Safer Pro–have been developed to send emergency alerts with GPS tracking to friends and loved ones.

[1] Parker J. Palmer, The Company of Strangers (New York: Crossroad, 1981), p. 39.

[2] Palmer, p. 48.

Interpersonal Divide Favorably Reviewed in International Journal of Communication

The following is from the introduction and conclusion of the review by Min Wang (International Journal of Communication 12 [2018], Book Review 3776–3779 1932–8036/2018BKR0009)

Interpersonal Divide in the Age of the Machine … will likely appeal to students and scholars in a great variety of disciplines, including media studies, communication ethics, interpersonal communication, media literacy, psychology, sociology, data science, information technology, and science and technology studies.

In plain language and jargon-free prose, Bugeja fulfills his goal to address the impact of media and technology on human communities, universal principles, cultural values, and interpersonal relationships. His creative writing style makes Interpersonal Divide in the Age of the Machine accessible to multidisciplinary readers who wish to explore how media and technology, particularly big data and artificial intelligence, structure our lives. The critically-reviewed literature and abundant evidence support the viewpoints, arguments, and predictions in the book in an eloquent manner. The well-designed end-of-chapter exercises are directed interactively at students who can report the results of their exercises and experiments through discussion and debate, providing an outlet to inspire ideas, dialogue, and introspection.

Digital Crazytown and the Anonymous Memo

Unethical media and politics have combined to create “Digital Crazytown.”

On Sept. 5, the New York Times published an anonymous memo by a senior official in the Trump Administration who called the president so amoral that his “appointees have vowed to do what we can to preserve our democratic institutions while thwarting Mr. Trump’s more misguided impulses until he is out of office.”

The President is so irate that he believes the source of the memo may have committed “treason,” prompting dozens of his top officials to claim they were not the author.

One of those was Chief of Staff John Kelly, cited in Bob Woodward’s new book Fear as stating:

“He’s an idiot. It’s pointless to try to convince him of anything. He’s gone off the rails. We’re in Crazytown.”

Crazytown is an apt phrase describing the milieu in Washington.

As author of Interpersonal Divide in the Age of the Machine and Living Media Ethics, I can comment on two lingering questions concerning this issue: (a) Can technology help identify the author? And (b) should the Times have published an anonymous op-ed?

I have one more qualification: my Ph.D. in English, with specialties in Elizabethan playwrights such as Shakespeare and Ben Jonson.

In 2005, I published a piece in Inside Higher Ed in which I used my textual editing skills–developed to discern “fair” from “foul” copies of plays–to help identify a professor who kept leaving unflattering anonymous notes in the mailboxes of colleagues. Here’s what I wrote in the essay titled “Such Stuff As Footnotes Are Made On”:

You see, over time, each of us develops a distinct textual signature. We may be given to odd phrases, locutions and colloquialisms, such as “in regards to” or “clearly, it seems” or “in cahoots with,” as in, “In regards to his annual review, clearly, it seems, John Doe is in cahoots with the Dean.” Collect enough writing samples, and you can identify the likely source of such a sentence, just as you can discern a fair from foul excerpt of a Shakespearean play.

In this case, I took awkward locutions in the anonymous notes and ran them through thousands of emails on the university server. Bingo!
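The phrase-matching approach can itself be sketched in a few lines. Everything below is invented–the professors, the writing samples, the scoring rule–and real stylometry is far more sophisticated, but the mechanics are the same: score each candidate’s known writing against the anonymous note’s distinctive locutions.

```python
# Invented example of matching distinctive locutions against known
# writing samples to suggest a likely author.
DISTINCTIVE_PHRASES = ["in regards to", "clearly, it seems", "in cahoots with"]

known_writing = {
    "Prof. A": "In regards to the budget, clearly, it seems we must act. "
               "He is in cahoots with the dean, in regards to hiring.",
    "Prof. B": "Regarding the budget, we should obviously act soon.",
}

def phrase_score(text):
    """Count occurrences of the anonymous note's pet phrases."""
    text = text.lower()
    return sum(text.count(phrase) for phrase in DISTINCTIVE_PHRASES)

scores = {author: phrase_score(sample) for author, sample in known_writing.items()}
likely = max(scores, key=scores.get)
print(scores)   # Prof. A matches four times; Prof. B, none
print(likely)   # 'Prof. A'
```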

Textual analysis of this kind is a popular application, the most famous example of which concerned Shakespeare professor Don Foster at Vassar College, known for outing journalist Joe Klein as the anonymous author of the 1996 book Primary Colors.

A quick textual analysis of the anonymous memo concerns its use of the word “lodestar”–a navigation star, typically Polaris, used to guide a ship–a word favored by Vice President Mike Pence. People quickly glommed on to that in videos circulating online.

Not so fast. First of all, Pence issued a fierce denial that he was the author, stating in the Times:

“Anyone who would write an anonymous editorial smearing this president who’s provided extraordinary leadership for this country should not be working for this administration. They ought to do the honorable thing and they ought to resign.”

We expect denials, of course. However, there is a big difference these days in detecting linguistic fingerprints compared to when Foster and I did it years ago. Pence could have been set up by someone so technologically savvy that use of that word “lodestar” was deliberate.

That’s how digitally manipulative we have become.

Nonetheless, tech applications using machine intelligence have been used to detect authorship for the past decade. Case in point: when Harry Potter author J.K. Rowling wrote the novel The Cuckoo’s Calling under the pen name Robert Galbraith, readers noticed linguistic similarities. An authorship-attribution program was applied, and, bingo, Rowling was identified.

AI used to detect the author of the memo typically can work around planted words like “lodestar” and provide statistical probabilities concerning who wrote it.
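One reason such tools can work around a planted word is that they typically weigh habitual “function words” rather than rare content words; an impostor can plant “lodestar,” but faking someone’s unconscious rates of “the,” “of” and “to” is far harder. A minimal, invented sketch of that idea:

```python
import math
from collections import Counter

# Toy function-word stylometry. The word list is abbreviated and the
# text samples are invented; real systems use hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "that", "it"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    """Euclidean distance between profiles; smaller = more similar."""
    pa, pb = profile(text_a), profile(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

memo = "it is the duty of the officials to resist and to report"
candidate_1 = "it is the role of the staff to serve and to advise"
candidate_2 = "we must fight fight fight because winning matters most"

# Candidate 1's habitual style sits closer to the memo's.
print(distance(memo, candidate_1) < distance(memo, candidate_2))  # True
```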

The next question is whether the Times should have published it. Here’s how the newspaper defended its decision:

The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure. We believe publishing this essay anonymously is the only way to deliver an important perspective to our readers.

Well, wait a minute. In an era of fake news, including stories promulgated to spoof the public and its opinion–such as the one about Michael Jordan resigning from the Nike board over the ad featuring Colin Kaepernick–journalistic integrity “trumps” sensationalism. So no, the Times should not have published the anonymous op-ed unless–and this is a big unless–someone so high in the administration wrote it that editors just could not resist the temptation to violate their own values. Here’s an excerpt about anonymous sources from the Times ethics code:

Because our voice is loud and far-reaching, The Times recognizes an ethical responsibility to correct all its factual errors, large and small. … We observe the Newsroom Integrity Statement, promulgated in 1999, which deals with such rudimentary professional practices as the importance of checking facts, the exactness of quotations, the integrity of photographs and our distaste for anonymous sourcing [my italics].

Now the Times faces another ethical dilemma. The Opinion section operates apart from the News division. Will one investigate the other? President Trump has suggested just that on Twitter.

That’s not as far-fetched as it might seem, and that statement is testament to just how crazy journalism, along with politics, has become in digital Crazytown.

The forthcoming edition of Living Media Ethics has chapters on manipulation, temptation and ethics codes, including anonymous sourcing and its dangers. Interpersonal Divide includes chapters on artificial intelligence and how it is being used in datamining and surveillance.

To Share or Not to Share: Racist Robocalls in Ethics and Technology Courses

I am the author of two books that foresaw how technology would be used to foment hate–Interpersonal Divide and Living Media Ethics. I am also a professor who teaches media ethics and technology and social change at Iowa State University. In the past, I could share distressing racial content with an appropriate trigger warning. But what to do when the content is so reprehensible that I, as an instructor, wish I had not viewed and heard it?

Here’s how I decided to handle it:

1. Do not share links. Rather than direct my students to sites containing the vile messages, said to be the work of white supremacists, I provided a screenshot of the results that appear when “racist robocalls” is typed into Google (2 September 2018).

2. Do share multiple warnings. I have about two dozen African-American and Latinx students in my classes. Content of these robocalls affects them in a despicable manner. There’s another risk: One of the calls concerns the murder of Mollie Tibbetts, a young woman from our sister school, the University of Iowa. There’s no telling if someone in my classes knows her or her family.

3. Provide summaries. Those who wish to view both stories will have to type “racist robocalls” themselves and follow the links from their own smartphones, tablets or computers–voluntarily, and after being forewarned by their teacher.

For discussion purposes, I summarized verbatim from mainstream media:

An assertion by a white gubernatorial candidate that Florida voters can’t afford to “monkey this up” by voting for his black opponent was widely viewed as a “dog whistle” to rally racists. If it were a dog whistle — and GOP candidate Ron DeSantis denies any racial intent against Democrat Andrew Gillum — then a jungle music-scored robo-call that has circulated in Florida is more akin to a bullhorn–“‘We Negroes’ robocall is an attempt to ‘weaponize race’ in Florida campaign, Gillum warns.” (The Washington Post, 9/2/2018)

An out-of-state white supremacy group has claimed responsibility for disturbing neo-Nazi robocalls using the murder of Mollie Tibbetts to push a violent, racist message in Iowa. Latino leaders said the calls are frightening the community and causing serious anxiety and fear in central Iowa. The 1 1/2-minute robocall begins by talking about Tibbetts’ death, saying she was “stabbed to death by an invader from Mexico.” It goes on to call for the deaths of all 58 million Latinos in the United States–“Alarming neo-Nazi robocall hits central Iowa.” (KCCI-Des Moines, 9/2/2018)

I feel good about this practice, even though I know my journalism colleagues may disagree, believing I am sanitizing the world. I also know most students may have no problem viewing and hearing the content. (Some might, and those are the ones I am concerned about here.)

Educators may understand my method, focusing on the core concept in this exercise: Technology, once touted as our best hope to build inclusive communities, has been weaponized to destroy that idealistic goal.

Media ethics does call for judicious use of hate messages, especially when content emanates from a white supremacy group. You don’t want to promote such groups, even though summaries such as those above do provide a modicum of public exposure.

But good teaching entails understanding how students learn. The point is for both classes to recognize that technology is being used to strike fear in the populace. My lesson plan does not call for igniting everyone’s emotions to such extent that they leave class in fear of or angry about the world.

It is that fear, by the way, that those robocalls intend to trigger. Not so this time.