Author: Michael Bugeja

Social Media Amplifies Stereotypes

The University of Missouri Athletic Department wanted to promote the NCAA’s diversity week but sparked outrage over how African American athletes were depicted. Think before you tweet, or suffer a similar fate.

The intent of the tweet was proactive, celebrating diversity by promoting aspirations of athletes. It had the opposite effect.

Included in the photo above were track athlete Arielle Mack, depicted with the slogan “I am an African American woman,” and ticket office employee Chad Jones-Hicks, who appeared above the statement, “I value equality.” The tagline for white gymnast Chelsey Christensen read “I am a future doctor”; the one for swimmer C.J. Kovac proclaimed, “I am a future corporate financer (sic).”

The misspelling of “financier” indicates a lack of fact-checking. Had someone analyzed each word of the post, perhaps the disparities could have been avoided. To be sure, Mack and Jones-Hicks have aspirations on par with those of Christensen and Kovac, but the emphasis for the Black athletes was on race rather than ambition.

Anything on the Internet can go viral, undermining intent and tainting an organization’s reputation. Clearly, Mizzou Athletics wanted to celebrate diversity and never meant the post to be demeaning.

According to the Washington Post, the tweet was based on a video containing this quote from Mack:  “I am an African American woman, a sister, a daughter, a volunteer and a future physical therapist.” The tagline, of course, should have been “future physical therapist.”

Perhaps one errant tagline could be forgiven; but in this case, there were three.

Sprinter Caulin Graves said, “I am a brother, uncle and best of all, I am a leader [emphasis added].” This is how Graves was depicted:

The Athletic Department apologized for the tweet with another tweet containing a video upon which the errant post was based:

The video, a professional product, has much to commend it. However, the stereotypical tweet undermined that effort.

Vincent Filak, who covered the issue on the Dynamics of Writing website, had these recommendations:

  • Scrutinize each word of any post to guard against stereotypes.
  • Ask for a second opinion if you are unsure whether you are disparaging anyone.
  • Run the content by a source included in the content for his or her opinion.
  • Talk to an expert who may have insight or advice on inclusion.

Filak adds, “Even if your newsroom, your PR firm or your ad agency doesn’t have a cornucopia of diversity, you can still avoid dumb mistakes by asking for help.”

Take time with social media posts. Think critically or risk being the target of criticism.

Robotic Hiring Systems and Discrimination

Companies using machine hiring systems might screen out potential employees in violation of federal laws prohibiting bias based on race, disability, age and other factors. Humans must honor protected classes in interviews, while AI vendors protect proprietary algorithms.

In the above video, Wall Street Journal senior correspondent Jason Bellini covers the pros and cons of robotic hiring systems. He interviews Kevin Parker, CEO of HireVue, who says his platform is more objective than traditional interviews because it removes bias from the hiring process. However, Bellini also interviews Ifeoma Ajunwa, a legal scholar and labor law professor at Cornell University, who challenges that view.

First, some legal background:

Employers who interview job applicants must adhere to the tenets of Title VII of the Civil Rights Act of 1964, which forbids discrimination based on race, color, religion, sex and national origin; companion statutes extend similar protections to age and disability. Questions must be free of bias. For instance, an interviewer may not inquire about a candidate’s height, weight or marital status.

No doubt AI programmers have taken Title VII into account when phrasing interview questions, such as those found in this tip sheet by the University of New England. But that’s not where algorithmic discrimination might occur.

That bias might be subtle, programmed into an algorithm adapted to the hiring company’s idea of an “ideal” job candidate. People might be excluded without anyone knowing it if the robot is measuring facial features for age, weight, symmetry, voice tone or other distinguishing human features. There is no real way of knowing without examining the proprietary program.

Dr. Ajunwa addresses this concern in an NPR interview:

So that’s where it gets more complicated – right? – because a job applicant could suspect that the reason they were refused a job was based on characteristics such as race or gender, and this is certainly prohibited by law. But the problem is how to prove this. So the law requires that you prove either intent to discriminate or you show a pattern of discrimination. Automated hiring platforms actually make it much harder to do either of those.

And a lot of times, the algorithms that are part of the hiring system, they are considered proprietary, meaning that they’re a trade secret. So you may not actually be able to be privy to exactly how the algorithms were programmed and also to exactly what attributes were considered. So that actually makes it quite difficult for a job applicant.

Benetech, a nonprofit whose mission is “to empower communities with software for social good,” is concerned about AI hiring systems discriminating against people with disabilities. The organization discusses key findings of a 2018 study titled “Expanding Employment Success for People with Disabilities”:

  • Artificial intelligence tools are increasingly widespread and vendors of these products have little understanding of their negative impact on the employment of people with disabilities.
  • The level of data collection about all of the relevant issues remains rudimentary, limiting many opportunities for improvements.
  • It is clear that employers see people with disabilities primarily through a compliance lens, and not through a business opportunity frame.

As AI hiring systems become more popular with such companies as Goldman Sachs, Unilever and Vodafone, attorneys and legislators are investigating ways to ensure algorithms are compliant with federal law.

Illinois is among the first states in the nation to take on robotic hiring programs with its “Artificial Intelligence Video Interview Act,” which requires transparency and applicant consent from any company using these algorithms.

In a post about the new law, Bloomberg Law states:

Employers increasingly are using AI-powered platforms such as Gecko, Mya, AutoView, and HireVue to streamline their recruitment processes, control costs, and recruit the best workers. Providers claim their technologies analyze facial expressions, gestures, and word choice to evaluate qualities such as honesty, reliability, and professionalism.

But the technology is also controversial. Privacy advocates contend AI interview systems may inject algorithmic bias into recruitment processes, and that AI systems could generate unfounded conclusions about applicants based on race, ethnicity, gender, disability, and other factors.

Interpersonal Divide in the Age of the Machine contains chapters that address the inherent biases of algorithmic programming. Institutional racism, subliminally associated with an organization’s target audience or bottom line, may be encoded into sophisticated robotic systems.

For instance, the Washington Post reports that a popular algorithm that identifies patients who need extra medical care “dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine.”

When it comes to robotic HR systems, selection is only the beginning of what awaits those the algorithm picks for employment. If technology is used to select a person for a job, one can anticipate that it will also be used to monitor performance on that job.

Here’s an excerpt from Interpersonal Divide:

Machines not only monitor how employees are using devices and applications but also may be programmed to detect moods and behaviors of those employees. Machines monitor employees to an alarming degree in some companies, often under the pretext of improving performance. Stress is measured, too, although usually in a negative light. Examples include tracking a worker’s Internet and social media use; tapping their phones, emails and texts; measuring keystroke speed and accuracy; deploying video surveillance; and embedding chips in company badges to evaluate whereabouts, posture and voice tone.

Cyberlaw needs to catch up with federal labor law, especially when AI is used in hiring and firing decisions. As Bloomberg Law notes in its report, some labor law attorneys believe algorithmic systems could unintentionally screen out protected classes. One attorney cited in the above post suggests employers should test robotic systems against a pool of candidates for potential bias.
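That kind of audit can be sketched in a few lines of code. The snippet below is a minimal illustration of one common benchmark, the EEOC’s “four-fifths rule,” which flags any group whose selection rate falls below 80 percent of the most-selected group’s rate. The group labels and outcome counts here are hypothetical, and a real audit would require far more statistical and legal care than this sketch suggests.

```python
from collections import defaultdict

def adverse_impact_ratios(candidates):
    """Selection rate of each group divided by the highest group's rate.

    candidates: iterable of (group_label, was_selected) pairs.
    """
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, selected in candidates:
        totals[group] += 1
        if selected:
            advanced[group] += 1
    rates = {g: advanced[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical outcomes from an automated screen: 40 of 100 group-A
# candidates advance to interviews, but only 20 of 100 group-B candidates.
pool = ([("A", True)] * 40 + [("A", False)] * 60 +
        [("B", True)] * 20 + [("B", False)] * 80)

ratios = adverse_impact_ratios(pool)
# Groups falling below the four-fifths (80 percent) benchmark
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In this example, group B advances at half the rate of group A, so the audit flags it, which is exactly the kind of disparity an employer could surface by running a vendor’s system against a test pool before relying on it.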

Fakes, Hacks, Hoaxes and Tall Tales: The State of U.S. Media in the Post-Truth Era

My paper, “Fakes, Hacks, Hoaxes and Tall Tales: The State of U.S. Media in the Post-Truth Era,” has been posted by the Commonwealth Centre for Connected Learning at the University of Malta. Thanks to Alex Grech for his leadership and to the internationally known speakers who presented at the Post-Truth Conference Oct. 10-11 in Valletta.

You can download the paper at the link below. My abstract:

“Since the 2016 presidential election in the United States, politics and journalism have combined to undermine reality to such extent that facts are alternative, and truth is not truth. All too often, social media are complicit in the obfuscation. This paper investigates that charge, exploring the role of 24/7 ubiquitous online access in creating a culture of lies, exposing inconvenient truths about American politics and news outlets in the post-truth era.”

Can long-form journalism bring readers back by learning from the literary essay? 

In this abbreviated post, you can view how consumer technology has slowly eroded the audience for long-form or slow journalism. Below you’ll find a link to the Online Journalism Blog where we share 17 rhetorical concepts that can mitigate the smartphone effect.

In 2016 a Pew report looked at how readers interacted with over 74,000 articles on their mobile phones. It concluded that long-form reporting was holding its own despite the shift to mobile, drawing longer engaged time (123 seconds compared with 57.1 for short-form stories) and the same number of visits:

“While 123 seconds – or just over two minutes – may not seem long, and a far cry from the idealized vision of citizens settling in with the morning newspaper, two minutes is far longer than most local television news stories today.”

Long-form articles get twice the engaged time and about the same number of visitors on mobile

Tweaking the concept of long-form

But buried in the report were some problems: only 3 percent of long-form readers and 4 percent of short-form readers returned to the content once they left it. Both types of articles also had brief lifespans after posting, with interaction dropping after three days by 89 percent for short-form and 83 percent for long-form.

Moreover, an “overwhelming majority of both long-form readers (72%) and short-form readers (79%) view just one article on a given site over the course of a month on their cellphone.”

Long-form content appeared to be performing better than short-form content on most measures — but it was a pretty low bar.

If the genre is to survive in the current digital environment, the prevailing concept of long-form journalism, it seems, still needs tweaking so that readers read more stories, return to them more frequently in order to finish them, and engage for even longer periods.

To view the 17 rhetorical terms, visit Online Journalism Blog:

Media Ethics: Behind The Carson King Saga


 Interview begins at the 6:50 mark

Carson King, 24, has raised over $1 million for charity, along with a wave of controversy, after going viral with a sign requesting beer money at the Cyclone-Hawkeye game on Sept. 14. Media ethicist and Iowa State University professor Michael Bugeja joins this ‘News Buzz’ edition of River to River to give his perspective on The Des Moines Register’s investigation of King’s past tweets and the ensuing backlash.

Carson King Lesson More about Internet than Ethics

Carson King held up a beer sign during a mega-media sporting event, and his life changed overnight. He rode the media blitz from icon to apology. In the age of the machine, the same thing can happen to anyone at the right time in the wrong viral place.

A 24-year-old man held up a sign asking for beer money at the widely televised ISU-Iowa ESPN Game Day media extravaganza. It was a thunderous day, with multiple delays at Jack Trice Stadium. For many tailgaters, beer was a good remedy to wait out the weather.

ESPN was crawling with media trying to create content before, during and after downpours, and King appeared in a short segment.

Then the Internet happened, and money started flowing as freely as tap beer into King’s online Venmo account.

As the funds grew to about $600, King did the right thing: He said he would give that money to University of Iowa Children’s Hospital.

Seeing endorsement opportunities, as well as compassion in a thoughtful young man, Busch Light and Venmo promised to match whatever funds King raised.

He ended up raising a lot, more than $1 million.

Mediated Brands 

King’s own brand metamorphosed swiftly on the Internet. In the course of a few weeks, he became a celebrity – an “Iowa Legend” – with his likeness on a beer can.


His story was local as well as national. The Des Moines Register would do a “profile,” a genre that explores the background and character of a newsmaker.

According to the journalism website Poynter, “The subjects of profiles could be people who are on the brink of change, unusual people, people in the community others may have wondered about but never bothered to notice. …”

That post was written in 2002, and the world has changed since then, although many journalists as well as news consumers don’t quite realize how much. The Internet is immediate, global, and more powerful than anyone thinks … until they have a Carson King experience.

This is how his story morphed from compassion to apology.

In a routine background check, the Register did what employers, college admissions officers, parents and yes, college students do: It looked at King’s past social media posts.

There were two racist ones posted when he was 16. The reporter asked him about them, and King didn’t immediately remember them. Internet remembered them, and now the world would probably see them.

So King did what many public relations practitioners would have advised: Get in front of the story.

He composed this statement:

Then he appeared on WHOtv.

Only he did it before the Register went to press.

The newspaper had planned to reference the tweets in a few sentences at the end of the profile, which largely would have focused on his positive impact. (Here is the published piece.) Some might say, had King not got in front of the story, those sentences would have been dismissed or not even read in a social media era where users typically are too distracted to read to the end of any story online or in print.

That also is an Internet effect.

Media Ethics

A post by Register Editor Carol Hunter explained what was happening behind the scenes. (We don’t know if the reporter found the offensive tweets and went to an editor for advice, or whether he contacted King directly, setting off a chain reaction.) Debates arose in the newsroom with pros and cons and provocative questions. (Note:  Also, a second Hunter follow-up was posted on 9/26/19 noting policy changes. The reporter in question no longer works at the Register.)

Here is an excerpt from Hunter’s initial explanation:

Should that material be included in the profile at all? The jokes were highly inappropriate and were public posts. Shouldn’t that be acknowledged to all the people who had donated money to King’s cause or were planning to do so?

The counter arguments: The tweets were posted seven years ago, when King was 16. And he was remorseful. Should we chalk up the posts to a youthful mistake and omit the information?

As Hunter acknowledged in her post, reasonable people could disagree with the decision to question King about the tweets and to include them in the story.

That’s a media ethics question in the grey area in which the Register found itself. The backlash was swift and severe, largely focusing on the newspaper as symbol of demonizing media. However, as Hunter knew, there was no clear answer, given the circumstances: only choices and consequences.

Ramifications were immediate. Anheuser-Busch terminated its relationship with King and issued this statement:

Carson King had multiple social media posts that do not align with our values as a brand or as a company and we will have no further association with him. We are honoring our commitment by donating more than $350,000 to the University of Iowa Hospitals and Clinics.

From a media ethics perspective, we might weigh two standards on each side of the decision to withhold or publish information about King’s tweets.

In favor of withholding:

  • Fairness: The tweets had little to do with the story about charity and compassion.
  • Do No Harm: Mentioning the tweets would cause harm to the primary beneficiary: Children’s Hospital.

In favor of publishing:

  • Transparency: The tweets were public.
  • Public Information: Donors had a right to know.
A media ethicist might have advised the Register to omit past juvenile social media posts in profiles of adults unless those posts were indisputably associated with the topic of a story. A teen tweet about violence in a story about violence, for instance, would fall in that category.  Conversely, the newspaper could have done a positive profile about King without mentioning the tweets and scheduled a follow-up story at a later date, perhaps with a spin on how character develops with education and experience.

But that was preempted, too. Something or someone triggered a series of events, prompting King to get out in front of the story.

Oddity and Odyssey  

The Carson King episode was a journalism anomaly. Several coincidences contributed to this story. It rained. It was Game Day. ESPN and its audience were bored. Corporate branders saw opportunities in a photogenic man who matched their target demographics and psychographics. And, of course, his last name happened to be “King,” as in “King of Beers,” the Budweiser slogan.

And then something happened that showed everyone just how powerful social media can be when we fail to practice discretion. People began scouring the reporter’s past tweets and found offensive ones there, too.

Now the story was national. The Washington Post picked it up in an article titled “Iowa reporter who found a viral star’s racist tweets slammed when critics find his own offensive posts.”

The Post published this Twitter screenshot.

But that’s not the story either.

Like Carson King, many corporate influencers have said, done or disseminated outrageous, hideous, hurtful, stereotypical, profane or slanderous tweets and posts. But there is a key difference between them and King: They deleted them.

King never thought he would be a national celebrity. So he didn’t delete.

In his statement, he writes:

It was just 10 days ago that I was a guy in the crowd holding a sign looking for beer money on ESPN Game Day. Since then – so much has happened. Especially when I announced all of the money would be donated to the Stead Family Children’s Hospital in Iowa City. Thousands of people have donated and today the account is at 1.14 million dollars. Much of this has happened thanks to social media – it has the power to bring people together for a common good.

It also can make your life very public.

Celebrity icon Andy Warhol prophesied in 1968 that everyone in the future “will be world-famous for 15 minutes.” In the Internet age that phrase might be “world-famous and then infamous in 15 minutes.”

King’s fame happened because of the omnipresent Internet, responsible both for more than a million dollars in charitable giving and for his rapid fall from corporate grace.

Convention and Intervention

Despite the complexities and anomalies of the Carson King saga, the audience recognized the familiar journalism pattern: Elevate someone to celebrity status overnight, then cut the person down and find a scapegoat. This time, the Register was tagged for that role.

An online petition appeared, demanding that the Register issue a front-page apology to Carson King. Its goal is 200,000 digital signatures, and at this writing, some 157,761 people had signed. (In fact, in the short span of composing this paragraph, more than a dozen more signatures appeared.)

Gov. Kim Reynolds has proclaimed Saturday, Sept. 28, “Carson King Day in Iowa.” You can read the proclamation here.

The celebration is apt in many ways, with one caveat. King’s juvenile offensive tweets must have been especially hurtful for any peer or person of color reading them. To be sure, teens say all manner of offensive things, and many later realize the errors of their derogatory ways. Often, teachers or role models will have intervened to explain the history and hurt of racism, treating infractions as teachable moments about the importance of inclusion.

King said as much in his statement:

Thankfully, high school kids grow up and hopefully become responsible and caring adults. I think my feelings are better summed up by a post from just 3 years ago:

“Until we as a people learn that racism and hate are learned behaviors, we won’t get rid of it. Tolerance towards others is the first step.” — July 8, 2016

Education is the instructor in cases like this, and that also applies to Internet.

Interpersonal Divide continues to advocate for media and technology literacy, as early as middle school and continuing through college. We all have to confront the new digital realities shaping social norms because of the speed and viral propensity of the web.

That lesson applies to journalism. Withhold today what you cannot decide for tomorrow. Re-evaluate ethical standards established in the age of print and decide if they still apply in the age of the machine.

Courage of Greta Thunberg: Social Media Propels Message

Swedish teen activist Greta Thunberg displays moral courage addressing climate change at the United Nations. She used social media to spread her views. Trolls used it against her, targeting her Asperger’s diagnosis. Yet she persists with a powerful, provocative message.

Some say her Asperger’s diagnosis allows her to speak boldly. Some say it’s just plain courage, with a message delivered at the right time and place through the proper platform. In any case, Greta Thunberg’s use of social media has become the digital megaphone that inspires thousands. Thunberg uses Internet in the manner that many of us envisioned around the time of her birth: bringing to the world a global message of proactive change.

In 2004, the Pew Research Center surveyed experts in The Future of the Internet I about how worldwide access would be used in the current day. Some of the predictions were spot on, including major cyberattacks on the grid, the Internet integrated seamlessly into physical environs, and increased levels of government surveillance.

Here’s one of the fails. The majority of experts believed that more information would lead to higher levels of social awareness rather than political bias.

Just 32% of these experts agreed that people would use the internet to support their political biases and filter out information that disagrees with their views. Half the respondents disagreed with or disputed that prediction.

Thunberg’s rise as an environmental icon has to do with interpersonal as well as digital protests. Her personal narrative began in 2018 when she left school to protest outside the Swedish parliament, demanding that politicians act to sustain the environment. She was photographed, blogged and tweeted about on social media, inspiring students in her own country and Europe to participate in similar protests.

Now she has taken her message to the United States, a country that has withdrawn from participation in the 2015 Paris Agreement on climate change mitigation.

Thunberg’s courage also includes ignoring repeated attacks on her person. CNN reported that President Trump mocked her after her UN speech, tweeting: “She seems like a very happy young girl looking forward to a bright and wonderful future. So nice to see!”

Internet trolls target her Asperger’s condition in coordinated political and personal attempts to undermine her message.

While typical teens might have yielded to such attacks, Thunberg responded with indifference and insight:

“When haters go after your looks and differences, it means they have nowhere left to go. And then you know you’re winning! I have Asperger’s and that means I’m sometimes a bit different from the norm. And – given the right circumstances – being different is a superpower.”

Interpersonal Divide often discusses the biases and banalities of social media. However, the book also documents teens who have used social media as Thunberg does, to engage and teach. Here’s an excerpt:

In 2016, the 12-year-old singer-songwriter Grace Vanderwaal learned how to play the ukulele by watching YouTube videos and went on to win a million dollars on the competitive talent show “America’s Got Talent.” Online content also has advanced careers of budding scientists. In 2015, 17-year-old Olivia Hallisey helped solve a refrigeration issue in Africa associated with portable diagnostic tests for Ebola by reading online about a silk fiber derivative that keeps proteins stable without requiring cooling temperatures.

Whether arts or sciences, Internet can inspire innovation and trigger social change. It also has the power to create overnight global icons with powerful messages, as in the case of Thunberg.

As Interpersonal Divide also notes, however, “there is one critical component that can cultivate astute use of online resources, and that is parental, peer and teacher guidance on how to access information from reliable sources and avoid dangers from untrustworthy ones.”

Thunberg’s message is empowered by reliable sources on climate change, informing everyone about the need to take action to repair and sustain the environment.