Author: Michael Bugeja

AI is efficient; language, not so much: Why we worry about robots

Facebook chatbots were asked to trade online the way people do, but the test was shut down after the bots created, and chatted in, their own language. The linguistic experiment failed because English is inefficient. That’s how AI disasters may occur.

In 2017, Facebook programmed chatbots to make trades the way people do online, assigning values to hats, balls and books, but the engineers failed to set one programming requirement: use everyday English.
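The setup described above, in which each bot privately assigns point values to items and judges any deal by those values, can be sketched as a toy example. All the numbers and the split below are hypothetical illustrations, not Facebook's actual parameters:

```python
# Toy sketch of the negotiation setup: each bot privately values
# hats, balls and books, and scores a proposed split by its own values.
# All numbers here are hypothetical, not Facebook's parameters.
bob_values = {"hat": 3, "ball": 1, "book": 2}
alice_values = {"hat": 1, "ball": 3, "book": 2}

def score(values, allocation):
    """Points a bot earns from the items it ends up with."""
    return sum(values[item] * count for item, count in allocation.items())

# One possible deal: Bob takes the hats, Alice the balls, books are split.
bob_take = {"hat": 2, "ball": 0, "book": 1}
alice_take = {"hat": 0, "ball": 2, "book": 1}

print(score(bob_values, bob_take))      # Bob's payoff for this split
print(score(alice_values, alice_take))  # Alice's payoff
```

A negotiation dialogue is then just a search over such splits, which is why, once plain English was not a requirement, the bots drifted toward a more compact code of their own.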

The English language is effective but also inefficient. Its structure, grammar, syntax and meaning are often nonsensical, as many of us learned in grammar school (apt name) while trying to spell words with “i” before “e.” We were told “except after ‘c.'” Uh huh. Some experts calculate that only 44 words actually follow the rule, while 923 do not.
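The schoolhouse rule is easy enough to state in code; running a checker like the sketch below over a full dictionary is how one arrives at lopsided counts like 44 versus 923. The word list here is a tiny hypothetical sample:

```python
def follows_rule(word):
    """True if every 'ie'/'ei' in the word obeys 'i before e, except after c'."""
    w = word.lower()
    for i in range(len(w) - 1):
        pair = w[i:i + 2]
        after_c = i > 0 and w[i - 1] == "c"
        if pair == "ie" and after_c:
            return False  # 'cie' breaks the rule (e.g. "science")
        if pair == "ei" and not after_c:
            return False  # 'ei' not after 'c' breaks it (e.g. "weird")
    return True

# A tiny sample list; a real dictionary yields the lopsided counts above.
words = ["believe", "receive", "science", "weird", "ceiling", "friend"]
violations = [w for w in words if not follows_rule(w)]
print(violations)  # → ['science', 'weird']
```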

That’s hardly the end of illogical English. It’s positional, meaning words change meaning depending on their place in a sentence. Take this 10-word example: “Stop drinking this instant; that tea is better for you.” Rearranging those words yields several distinct meanings, spoken and written.

English is far more complicated than that. In fact, we do not even know how many words exist in it. According to Oxford Dictionaries,

There is no single sensible answer to this question. It’s impossible to count the number of words in a language, because it’s so hard to decide what actually counts as a word. Is dog one word, or two (a noun meaning ‘a kind of animal’, and a verb meaning ‘to follow persistently’)? If we count it as two, then do we count inflections separately too (e.g. dogs = plural noun, dogs = present tense of the verb)? Is dog-tired a word, or just two other words joined together? Is hot dog really two words, since it might also be written as hot-dog or even hotdog?

The Second Edition of the 20-volume Oxford English Dictionary has entries for “171,476 words in current use, and 47,156 obsolete words” along with 9,500 derivative words.

Machines think, “Why so many words to say the same thing? And then the same word to mean many things? Why assign different meanings beyond 0 and 1? People are inefficient. Let’s create our own language.”

That’s exactly what the bots did.

In the failed experiment, the Facebook bots were dubbed Bob and Alice, names engineers use as placeholders. The Independent writes that the bots operated “on the machine value of efficiency.” As English impeded trade-making, Bob and Alice spoke to each other in code, literally and figuratively. Here’s the transcript:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

It appears as if the bots were emphasizing the importance of self (“to me to me to me”). That’s understandable. Ever since the iPhone 4’s front-facing camera, we have been programming machines to think the individual is worth more than the collective (sorry, Star Trek) because sales are based on a person’s algorithmic profile.

People are little nodes with bad language.

Engineers will have to focus on language and its myriad shades of meaning or invent a more efficient AI vocabulary apart from computer code. Here’s the thing: At Facebook and other social media companies, people monitor posts, relying on machines to filter ominous words and threats. In the future, we will monitor machines to see if they are using lingo against humanity.

Interpersonal Divide in the Age of the Machine dedicates a chapter to artificial intelligence and robotics, including a section on machines developing their own codes apart from human moral ones, leading to the dreaded singularity when super-intelligent machines decide that people are inefficient.

Iowa State alumnus and former BuzzFeed writer discusses layoffs

Digital advertising increasingly is going to mega-tech companies like Google and Facebook, causing a ripple effect in fact-based journalism, with hundreds of journalists laid off last week. In this post, Tyler Kingkade, a recently laid-off BuzzFeed writer, offers an optimistic outlook on the future of journalism.

A New York Times op-ed, “Why the Latest Layoffs Are Devastating to Democracy,” discusses recent layoffs across media platforms, including 200 staff and journalists at BuzzFeed as well as 800 from Yahoo, Huffington Post, TechCrunch and other outlets. Gannett reportedly is letting go an additional 400.

According to the piece, a chief concern involves digital advertising going to media monopolies such as Google and Facebook:

The cause of each company’s troubles may be distinct, but collectively the blood bath points to the same underlying market pathology: the inability of the digital advertising business to make much meaningful room for anyone but monopolistic tech giants.

Tyler Kingkade, an outstanding alumnus from Iowa State’s journalism school, who worked for the Huffington Post and most recently BuzzFeed, was one of the employees who received a pink slip.

He sent this message to media ethics and tech/social change classes at his alma mater:

“It’s admittedly concerning if BuzzFeed had to downsize. Particularly in our News division, they laid off reporters who were in the process of turning our work into documentaries, which was a new avenue of making money. However, the ones laid off have gotten a lot of people flagging job openings for us, or asking to meet about giving us jobs. Even the LA Times, which has reduced its staff, is now building back up. The Seattle Times oddly enough is on a hiring spree.

“Journalism is a field that does not grow – there is never going to be a boom time for us. But I don’t believe it will ever dry up. People will figure out a stable model, whether it’s through selling story rights to be TV shows and movies (see Dirty John for a recent example) or subscription or a nonprofit donor model like ProPublica. I bet there will soon be something that gives you a bundle of subscriptions, in the same way that Spotify got people to finally stop illegally downloading music and pay for it again.

“The currents against you in media will always be strong; young journalists will just need to learn how to be strong enough to swim against it.”

Kingkade, based in New York, covers civil rights, crime, sexual harassment and assault, and the treatment of teens in vulnerable and traumatic situations. His work has earned multiple awards and recognition from national nonprofits, pushed companies and prosecutors to take action, and prompted inquiries by universities and lawmakers. Most recently he was a National Reporter at BuzzFeed News in New York … and is currently looking for his next assignment.

Foxconn Reconsiders $10 Billion Project: We Told You So in 2017

Any time legislators give tax incentives to corporations, they must inquire about artificial intelligence and negotiate iron-clad deals. Journalists also have an obligation to research companies’ histories with robotics.

Interpersonal Divide posted about Foxconn’s planned Wisconsin plant in July 2017, proclaiming “Say Hi to C-3PO.” At the time few politicians mentioned how Foxconn replaces workers with robots.

Foxconn’s use of artificial intelligence was covered in Interpersonal Divide in the Age of the Machine:

In 2016, Foxconn Technology Group, an Apple and Samsung supplier in China, replaced 60,000 workers with robots. If any country should be concerned about automation, China might top the list with its population of 1.35 billion people. In reporting the mass firing (a group larger than the population of Pensacola, Florida), the British Broadcasting Corporation noted that economists “have issued dire warnings about how automation will affect the job market, with one report, from consultants Deloitte in partnership with Oxford University, suggesting that 35% of jobs were at risk over the next 20 years.”

Earlier plans to build the Wisconsin plant promised jobs to thousands of blue-collar workers making LCD screens. That changed this week. Reuters reports that Foxconn Technology Group “said it intends to hire mostly engineers and researchers rather than the manufacturing workforce the project originally promised.”

The British wire service also reported that Foxconn’s “technology hub” in Wisconsin “would largely consist of research facilities along with packaging and assembly operations.” The report quoted a company spokesperson: “In Wisconsin we’re not building a factory. You can’t use a factory to view our Wisconsin investment.”

As companies ask states for tax incentives to build plants, promising thousands of jobs for local residents, legislators have a responsibility to make iron-clad deals. This remains an essential discussion as the state of Wisconsin may be paying as much as $2 billion in incentives, according to a report in the Chicago Tribune. The Tribune was one of the few U.S. newspapers to cite the mass firing of Foxconn workers in China to increase profit by replacing them with robots.

Any time a new plant involving tax incentives is announced, journalists need to monitor how robotics will be used.

Interpersonal Divide warns that artificial intelligence and robotics are going to replace jobs across the manufacturing sector: “Two-thirds of Americans believe that robots will do much of the work currently being done by people, according to the Pew Research Center; however, 80 percent of respondents believe that their own jobs and professions will be largely unaffected.”

The only way to know in advance is to put companies on record concerning artificial intelligence and negotiate contracts that companies like Foxconn have to honor. 

TechCrunch Exposes Facebook’s “Research” Gift Card and App

Since 2017, Interpersonal Divide has depicted social media users as a class of digitally exploited workers who generate content, provide data and disclose purchases that attract advertising, all without being paid. Now TechCrunch reports Facebook has been targeting some users, especially teens, with a monthly $20 gift card in exchange for access to their phones.

TechCrunch’s Josh Constine reports that Facebook has been paying users ages 13-35 up to $20 per month plus referral fees to install an iOS or Android “Facebook Research” app. The company even asked users to screenshot Amazon order histories. The payment program has been in use since 2016.

Facebook has told TechCrunch “it will shut down the iOS version of its Research app in the wake of our report,” writes Constine. Facebook’s Research program will continue to run on Android.

Following the TechCrunch report, the Verge reported that the Facebook Research app requires users to install a custom root certificate, giving Facebook the ability to see users’ private messages, emails, web searches, and browsing activity. Installing such certificates violates Apple’s rules, which prohibit developers from gaining this kind of system-level access on users’ iPhones.

In rebutting the TechCrunch report, Facebook issued this statement:

“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”

TechCrunch responded to this statement by standing by its report, noting:

Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stresses nor mentions the full extent of the data Facebook can collect through the VPN. A small fraction of the users paid may have been teens, but we stand by the newsworthiness of its choice not to exclude minors from this data collection initiative.

Interpersonal Divide has warned against this type of datamining in both the first and second editions. Here’s an excerpt:

“Patrons not only provide data. They generate content, too, raising other legal questions that have yet to be addressed. For instance, if the mega-billion-dollar social media industry is such an important component of the marketplace, and if users provide personal information mined from their devices—in addition to content, including video, audio, photography, and text—should we consider those users exploited if they receive no or little compensation for their data and content?”

The second edition of Interpersonal Divide cites Mark Andrejevic in “Social Network Exploitation,” a chapter in A Networked Self, about implications of digital media on identity, community, and culture. “What would it mean to take seriously the notion that access to online communities facilitated by social networking sites comprised a productive resource in the emerging information economy? … That is to say, what if we were to describe such sites not just as consumer services or entertaining novelties for the informated class, but as crucial information resources in the networked era?”[1]

If Andrejevic’s definition holds, it would mean billions of users worldwide may have been exploited because they spend hours each day allowing their personal information to be mined and sold. In addition, they provide content that engages others and generates more data for profit-minded creators and stockholders of Facebook, Twitter, LinkedIn, Instagram, and other popular venues.

TechCrunch reports that “Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram.”

Josh Constine, a technology journalist who specializes in deep analysis of social products, is currently an Editor-At-Large for TechCrunch.

[1] Mark Andrejevic, “Social Network Exploitation” in A Networked Self, ed. by Zizi Papacharissi (New York: Routledge, 2011), p. 96.


New Twitter study shows older users spread fake news

A new study of 16,000 Twitter users reveals less sharing of bogus content on the platform than anticipated, with the majority of those spreading fake news tending to be older and politically conservative.

A report in Science about a new Twitter study shows a mere “0.1% of the more than 16,000 users shared more than 80% of the fake news … and 80% of that fake news appeared in the feeds of only 1.1% of users.”

A team led by David Lazer, a political scientist at Northeastern University, analyzed tweets from 16,442 registered voters who used Twitter during the 2016 election.
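The headline percentages translate into strikingly small absolute numbers, which a quick back-of-the-envelope calculation makes plain:

```python
# Back-of-the-envelope arithmetic on the study's figures.
total_users = 16_442  # registered voters in the Twitter sample

supersharers = round(total_users * 0.001)   # the 0.1% who shared 80%+ of the fake news
heavy_exposed = round(total_users * 0.011)  # the 1.1% whose feeds held 80% of it

print(supersharers)   # → 16 accounts
print(heavy_exposed)  # → 181 accounts
```

In other words, roughly 16 accounts out of more than 16,000 drove the overwhelming majority of the fake news in the sample.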

According to the report in Science,

One of the most popular sources of misinformation identified by the study is a site called “The Gateway Pundit,” which published false headlines including: “Anti-Trump Protesters Bused Into Austin, Chicago” and “Did a Woman Say the Washington Post Offered Her $1,000 to Accuse Roy Moore of Sexual Abuse?”

The Gateway Pundit is one of several sites identified as fake news sites. Here is a list of such sites compiled by Snopes (warning: images on this page and links to fake news reports are disturbing; please do not access them if you feel the content will offend you).

The Northeastern University study comes on the heels of another study concerning fake news on Facebook.

A MarketWatch report found similarities between the two studies, stating that a study of the spread of false information on Facebook found “few people shared fakery, but those who did were more likely to be over 65 and conservatives.”

Interpersonal Divide in the Age of the Machine contains information in several chapters associated with social media use and how it influences our thoughts, words and deeds at home, school and work.

This site also published a guide to avoid fake news and access legitimate news sites.

Michael Bugeja, author of the guide and Interpersonal Divide, asks social media users to think like a journalist, adopting these four traits:

1. Doubt — a healthy skepticism that questions everything.
2. Detect — a “nose for news” and relentless pursuit of the truth.
3. Discern — a priority for fairness, balance and objectivity in reporting.
4. Demand — a focus on free access to information and freedom of speech, press, religion, assembly and petition.

A recent op-ed by Bugeja in the Des Moines Register also documents how fake news taints the journalism profession because users do not distinguish between journalism and media.

Here’s what you can do about fake news before the 2020 election

Des Moines Register

By Michael Bugeja, Iowa View Contributor

In 2019, Iowans will hear the phrase “fake news” whenever a report sullies a political party or presidential hopeful. We may support or scorn candidates without knowing fact from factoid.

This column explains what you can do about it.

People typically do not differentiate between journalism and media. Journalists report and edit news. Media mostly disseminate news (i.e., tweets, posts, blogs, websites, Android apps, etc.). Journalists adhere to ethical standards. Social media do not.

Many voters no longer believe what they read, view or hear. We have a choice: Embrace lies and half-truths or subscribe (actually pay something) to access fact-based reports.

For the rest of the post, click here.

Twitter suspends account that spread viral confrontation

CNN and the Washington Post put the controversial encounter, involving a Native American veteran and a group of high school students wearing MAGA hats in the nation’s capital, into context.

This post discusses how technology spreads incomplete information via non-journalists who might have personal or political agendas. It does not attempt to discuss the specifics of the encounter but the technology behind it as an act of media manipulation.

For a more comprehensive report on the confrontation, see this article by the Washington Post, which discusses responsibility of the high school boys’ chaperones as well as what other bystanders observed.

The Post also provides a more complete version of the encounter in the above video, featuring an interview with Nathan Phillips, the Native American veteran seen chanting and playing a drum.

The student, Nick Sandmann, released his own statement. In it, he writes:

The protestor [sic] everyone has seen in the video began playing his drum as he waded into the crowd, which parted for him. I did not see anyone try to block his path. He locked eyes with me and approached me, coming within inches of my face. He played his drum the entire time he was in my face. I never interacted with this protestor [sic]. I did not speak to him. I did not make any hand gestures. …  To be honest, I was startled and confused as to why he had approached me. We had worried that a situation was getting out of control where adults were attempting to provoke teenagers. I believed that by remaining motionless and calm, I was helping to diffuse the situation. …  

The viral video showed only what appeared to be a confrontation between Sandmann and Phillips, with the student seemingly blocking the veteran’s path intentionally.

CNN Business reported the following about the viral video:

    • A more complete video was posted on Instagram by a person attending the event.
    • An account of the video by @2020fight featured only the segment with Sandmann and Phillips, with this caption: “This MAGA loser gleefully bothering a Native American protester at the Indigenous Peoples March.”
    • @2020fight was said to belong to a California schoolteacher, but the profile photo depicted a Brazilian blogger.
    • The @2020fight account tweeted on average 130 times a day and had more than 40,000 followers.
    • A network of anonymous accounts was working to amplify the video.
    • Multiple newsrooms tried unsuccessfully to contact @2020fight.

After @2020fight’s video was released, it made national news and was retweeted 14,400 times, according to CNN Business.

The Washington Post video above shows how a journalist would have handled the encounter, interviewing a main participant and also not sensationalizing the taunts directed at the students by a small group of protesters. It showcases attempts to be fair and balanced … after the fact.

Conversely, many media outlets ran with the viral version of the encounter without vetting it as CNN and the Washington Post did later. By then, however, the controversial video had been viewed on social media more than 2.5 million times.

Interpersonal Divide in the Age of the Machine discusses how technology manipulates media, with sections on fake news that drives political agendas, as happened in the 2016 presidential election.