Author: Michael Bugeja

Foxconn Reconsiders $10 Billion Project: We Told You So in 2017

Any time legislators give tax incentives to corporations, they must inquire about artificial intelligence and negotiate iron-clad deals. Journalists also have an obligation to research companies’ histories with robotics.

Interpersonal Divide posted about Foxconn’s planned Wisconsin plant in July 2017, proclaiming “Say Hi to C-3PO.” At the time few politicians mentioned how Foxconn replaces workers with robots.

Foxconn’s use of artificial intelligence was covered in Interpersonal Divide in the Age of the Machine:

In 2016, Foxconn Technology Group, an Apple and Samsung supplier in China, replaced 60,000 workers with robots. If any country should be concerned about automation, China might top the list with its population of 1.35 billion people. In reporting the mass firing, which affected a workforce larger than the population of Pensacola, Florida, the British Broadcasting Corporation noted that economists “have issued dire warnings about how automation will affect the job market, with one report, from consultants Deloitte in partnership with Oxford University, suggesting that 35% of jobs were at risk over the next 20 years.”

Earlier plans to build the Wisconsin plant promised jobs to thousands of blue-collar workers making LCD screens. That changed this week. Reuters reports that Foxconn Technology Group “said it intends to hire mostly engineers and researchers rather than the manufacturing workforce the project originally promised.”

The British wire service also reported that Foxconn’s “technology hub” in Wisconsin “would largely consist of research facilities along with packaging and assembly operations.”  The report quoted a company spokesperson: “In Wisconsin we’re not building a factory. You can’t use a factory to view our Wisconsin investment.”

As companies ask states for tax incentives to build plants, promising thousands of jobs for local residents, legislators have a responsibility to make iron-clad deals. This remains an essential discussion because Wisconsin may be paying as much as $2 billion in incentives, according to a report in the Chicago Tribune. The Tribune was one of the few U.S. newspapers to note that Foxconn had fired masses of workers in China, replacing them with robots to increase profit.

Any time a new plant involving tax incentives is announced, journalists need to monitor how robotics will be used.

Interpersonal Divide warns that artificial intelligence and robotics are going to replace jobs across the manufacturing sector: “Two-thirds of Americans believe that robots will do much of the work currently being done by people, according to the Pew Research Center; however, 80 percent of respondents believe that their own jobs and professions will be largely unaffected.”

The only way to know in advance is to put companies on record concerning artificial intelligence and negotiate contracts that companies like Foxconn have to honor. 

TechCrunch Exposes Facebook’s “Research” Gift Card and App

Since 2017, Interpersonal Divide has depicted social media users as a class of digitally exploited workers who generate content, provide data and disclose purchases that attract advertising, all without being paid. Now TechCrunch reports Facebook has been targeting some users, especially teens, with a monthly $20 gift card in exchange for access to their phones.

TechCrunch’s Josh Constine reports that Facebook has been paying users ages 13-35 up to $20 per month plus referral fees to install an iOS or Android “Facebook Research” app. The company even asked users to screenshot Amazon order histories. The payment program has been in use since 2016.

Facebook has told TechCrunch “it will shut down the iOS version of its Research app in the wake of our report,” writes Constine. Facebook’s Research program will continue to run on Android.

Following the TechCrunch report, the Verge reported that the Facebook Research app requires users to install a custom root certificate, giving Facebook the ability to see users’ private messages, emails, web searches, and browsing activity. This violates Apple’s rules, which prohibit developers from installing root certificates on users’ iPhones.

In rebutting the TechCrunch report, Facebook issued this statement:

“Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.”

TechCrunch responded to this statement by standing by its report, noting:

Facebook did not publicly promote the Research VPN itself and used intermediaries that often didn’t disclose Facebook’s involvement until users had begun the signup process. While users were given clear instructions and warnings, the program never stresses nor mentions the full extent of the data Facebook can collect through the VPN. A small fraction of the users paid may have been teens, but we stand by the newsworthiness of its choice not to exclude minors from this data collection initiative.

Interpersonal Divide has warned against this type of datamining in both the first and second editions. Here’s an excerpt:

“Patrons not only provide data. They generate content, too, raising other legal questions that have yet to be addressed. For instance, if the mega-billion-dollar social media industry is such an important component of the marketplace, and if users provide personal information mined from their devices—in addition to content, including video, audio, photography, and text—should we consider those users exploited if they receive no or little compensation for their data and content?”

The second edition of Interpersonal Divide cites Mark Andrejevic in “Social Network Exploitation,” a chapter in A Networked Self, about implications of digital media on identity, community, and culture. “What would it mean to take seriously the notion that access to online communities facilitated by social networking sites comprised a productive resource in the emerging information economy? … That is to say, what if we were to describe such sites not just as consumer services or entertaining novelties for the informated class, but as crucial information resources in the networked era?”[1]

If Andrejevic’s definition holds, it would mean billions of users worldwide may have been exploited because they spend hours each day allowing their personal information to be mined and sold. In addition, they provide content that engages others and generates more data for profit-minded creators and stockholders of Facebook, Twitter, LinkedIn, Instagram, and other popular venues.

TechCrunch reports that “Facebook is particularly interested in what teens do on their phones as the demographic has increasingly abandoned the social network in favor of Snapchat, YouTube and Facebook’s acquisition Instagram.”

Josh Constine, a technology journalist who specializes in deep analysis of social products, is currently an Editor-At-Large for TechCrunch.

[1] Mark Andrejevic, “Social Network Exploitation” in A Networked Self, ed. by Zizi Papacharissi (New York: Routledge, 2011), p. 96.


New Twitter study shows older users spread fake news

A new study of more than 16,000 Twitter users reveals less sharing of bogus content on the platform than anticipated, with those spreading fake news tending to be older and politically conservative.

A report in Science about the new Twitter study shows a mere “0.1% of the more than 16,000 users shared more than 80% of the fake news … and 80% of that fake news appeared in the feeds of only 1.1% of users.”

A team led by David Lazer, a political scientist at Northeastern University, analyzed tweets from 16,442 registered voters who used Twitter during the 2016 election.

According to the report in Science,

One of the most popular sources of misinformation identified by the study is a site called “The Gateway Pundit,” which published false headlines including: “Anti-Trump Protesters Bused Into Austin, Chicago” and “Did a Woman Say the Washington Post Offered Her $1,000 to Accuse Roy Moore of Sexual Abuse?”

The Gateway Pundit is one of several sites identified as fake news sites. Here is a list of such sites compiled by Snopes (warning: images on this page and links to fake news reports are disturbing; please do not access if you feel content will offend you.)

The Northeastern University study comes on the heels of another concerning fake news on Facebook.

A MarketWatch report found similarities between the two studies, noting that the Facebook study likewise found “few people shared fakery, but those who did were more likely to be over 65 and conservatives.”

Interpersonal Divide in the Age of the Machine contains information in several chapters associated with social media use and how it influences our thoughts, words and deeds at home, school and work.

This site also published a guide to avoiding fake news and accessing legitimate news sites.

Michael Bugeja, author of the guide and Interpersonal Divide, asks social media users to think like a journalist, adopting these four traits:

1. Doubt — a healthy skepticism that questions everything.
2. Detect — a “nose for news” and relentless pursuit of the truth.
3. Discern — a priority for fairness, balance and objectivity in reporting.
4. Demand — a focus on free access to information and freedom of speech, press, religion, assembly and petition.

A recent op-ed by Bugeja in the Des Moines Register also documents how fake news taints the journalism profession because users do not distinguish between journalism and media.

Here’s what you can do about fake news before the 2020 election

Des Moines Register

By Michael Bugeja, Iowa View Contributor

In 2019, Iowans will hear the phrase “fake news” whenever a report sullies a political party or presidential hopeful. We may support or scorn candidates without knowing fact from factoid.

This column explains what you can do about it.

People typically do not differentiate between journalism and media. Journalists report and edit news. Media mostly disseminate news (i.e., tweets, posts, blogs, websites, Android apps, etc.). Journalists adhere to ethical standards. Social media do not.

Many voters no longer believe what they read, view or hear. We have a choice: Embrace lies and half-truths or subscribe (actually pay something) to access fact-based reports.

For the rest of the post, click here.

Twitter suspends account that spread viral confrontation

CNN and the Washington Post put into context the controversial encounter between a Native American veteran and a group of high school students wearing MAGA hats in the nation’s capital.

This post discusses how technology spreads incomplete information via non-journalists who might have personal or political agendas. It does not attempt to discuss the specifics of the encounter but the technology behind it as an act of media manipulation.

For a more comprehensive report on the confrontation, see this article by the Washington Post, which discusses responsibility of the high school boys’ chaperones as well as what other bystanders observed.

The Post also provides a more complete version of the encounter in the above video, featuring an interview with Nathan Phillips, the Native American veteran seen chanting and playing a drum.

The student, Nick Sandmann, released his own statement. In it, he writes:

The protestor [sic] everyone has seen in the video began playing his drum as he waded into the crowd, which parted for him. I did not see anyone try to block his path. He locked eyes with me and approached me, coming within inches of my face. He played his drum the entire time he was in my face. I never interacted with this protestor [sic]. I did not speak to him. I did not make any hand gestures. …  To be honest, I was startled and confused as to why he had approached me. We had worried that a situation was getting out of control where adults were attempting to provoke teenagers. I believed that by remaining motionless and calm, I was helping to diffuse the situation. …  

The viral video showed only what appeared to be a confrontation between Sandmann and Phillips, with the student intentionally blocking the veteran’s path.

CNN Business reported the following about the viral video:

    • A more complete video was posted on Instagram by a person attending the event.
    • A post of the video by @2020fight featured only the segment showing Sandmann and Phillips, with this caption: “This MAGA loser gleefully bothering a Native American protester at the Indigenous Peoples March.”
    • @2020fight was said to belong to a California schoolteacher, but the profile photo depicted a Brazilian blogger.
    • The @2020fight account tweeted on average 130 times a day and had more than 40,000 followers.
    • A network of anonymous accounts was working to amplify the video.
    • Multiple newsrooms tried unsuccessfully to contact @2020fight.

After @2020fight’s video was released, it made national news and was retweeted 14,400 times, according to CNN Business.

The Washington Post video above shows how a journalist would have handled the encounter, interviewing a main participant and also not sensationalizing the taunts directed at the students by a small group of protesters. It showcases attempts to be fair and balanced … after the fact.

Conversely, many media outlets ran with the viral version of the encounter without vetting it as CNN and the Washington Post did later. By then, however, the controversial video had been viewed on social media more than 2.5 million times.

Interpersonal Divide in the Age of the Machine discusses how technology manipulates media with sections about fake news that drive political agendas, as happened in the 2016 presidential election.

Should the Media Have Reported Un-Redacted Manafort Content?

Omitted from the buzz about the poorly redacted court filings associated with former Trump campaign manager Paul Manafort is the ethics of un-redacting and reporting sensitive content filed in U.S. courts.

In the 2016 presidential campaign, digital subterfuge was a key component, from creation of fake news to sale of Facebook user data. You’d think court filings on convicted Trump-campaign associate Paul Manafort might have been properly redacted.


Reporters hungry for more information about Special Counsel Robert Mueller’s investigation checked to see if a mistake was made in redacting a sensitive document prepared by Manafort’s attorneys.

It’s a common error. Tech experts will tell you that thousands of redacted documents online can be easily manipulated to reveal their content. Often a staff person or official overlays black boxes that can be moved or removed from the document, or selects passages and conceals them with a black background, which of course can be undone: just select the passage and apply a white background, exposing the text.

In this case, Manafort’s lawyers had filed a response to an allegation that he lied to prosecutors. However, on page 5, either his attorneys or Mueller’s staffers did not “flatten” the PDF, leaving the redacted passages readable.

Adobe has a tool that properly redacts (i.e., flattens) content, as shown in the video above. Other methods include photographing the document and making a PDF of the image, or printing the document, redacting it with a felt pen, and scanning it back into a PDF.
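The underlying failure can be sketched in a few lines of Python. The fragment below is a hand-built imitation of a PDF page content stream (an illustrative assumption, not a complete or valid PDF file): text is drawn first, then a black rectangle is painted over it. Because the text-drawing operator remains in the stream, the “redacted” string can be recovered from the raw bytes, even after compression.

```python
# Sketch of why a black box drawn over text is not redaction.
# The stream below imitates PDF content-stream syntax; it is an
# illustrative assumption, not a real PDF file.
import re
import zlib

# Draw text at (72, 700), then fill a black rectangle on top of it.
content = b"""
BT /F1 12 Tf 72 700 Td (SSN: 123-45-6789) Tj ET
0 0 0 rg 70 695 120 16 re f
"""

# Real PDFs often Flate-compress their streams; compression hides nothing.
compressed = zlib.compress(content)

# "Un-redacting" is just reading the stream back and finding the
# text operator: (string) Tj draws a string on the page.
recovered = zlib.decompress(compressed)
match = re.search(rb"\((.*?)\)\s*Tj", recovered)
print(match.group(1).decode())  # the "redacted" text is still there
```

Flattening, by contrast, replaces the page with an image in which only the black box survives, which is also why photographing or rescanning the document works.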

An ethical question, largely ignored by the media, is whether reporters should have disclosed the sensitive information as it was not intended for public consumption. Perhaps the disclosure would cause prosecutors or defense attorneys to change their strategy or even taint the ongoing investigation.

The media associated the disclosure with collusion, reporting that Manafort may have met with a Russian intelligence contact and provided polling data from the Trump campaign.

According to the Washington Post:

Attorneys for Paul Manafort, Trump’s former campaign chairman, inadvertently included a big reveal in a court filing on Tuesday through their clumsy failure to properly redact key portions. They admitted that during the 2016 campaign Manafort and his longtime associate Konstantin Kilimnik, who the FBI has said has ties to Russian intelligence, discussed a peace plan for Ukraine and that Manafort also shared with him political polling data.

As for media ethics, the standard seems situational: “If you make a digital mistake, we are absolved and so can report confidential information.”

Perhaps not in this case, but one nevertheless can imagine other scenarios when the dissemination of such information could pose a national security threat.

In the digital age, someone viewing an improperly redacted court filing is going to disclose the content. As soon as one party disseminates it, others will un-redact and report.

Ultimately, then, the government and officers of the court have a responsibility to know how to use digital tools before filing sensitive documents in the U.S. court system.

Washington Post: Deep Fake AI Technology Targets Women

WARNING: Sensitive material. Content involves artificial intelligence weaponized against women.

The Washington Post reports a disturbing new use of artificial intelligence, in a free app no less, that enables users to paste anyone’s image onto the face of someone else depicted in a video. The ethical issues raised by these “deepfake” videos are myriad, but the technology increasingly targets women.

According to a Dec. 30, 2018, article by Drew Harwell:

Supercharged by powerful and widely available artificial-intelligence software developed by Google, these lifelike “deepfake” videos have quickly multiplied across the Internet, blurring the line between truth and lie. But the videos have also been weaponized disproportionately against women, representing a new and degrading means of humiliation, harassment and abuse.

The Post reports that actress Scarlett Johansson’s face has been superimposed into dozens of graphic sex scenes now available on the Internet. There is also growing concern that the technology can take images from social media sites like Facebook and superimpose them onto similar explicit videos, a new type of AI revenge porn.

The fakes “are explicitly detailed, posted on popular porn sites and increasingly challenging to detect.” Worse, the Post article states that victims may have little recourse, as the legality of the technology has yet to be challenged; it may even be protected by the First Amendment unless covered by existing laws on defamation, identity theft or fraud.

An anonymous online community of creators is instructing others on how to create deepfake videos, a dangerous new weapon in the troll arsenal.

Interpersonal Divide in the Age of the Machine covers similar abuses of AI in several chapters, prophesying that these new technologies will erode our perception of the world so that we can no longer discern what is real from what is fake. That affects how we interact with others and the manner in which we experience the world.

The thesis of the book documents how “moral code is corrupted by machine code.”