Author: Michael Bugeja

Your Portal to Privacy Invasion

From the editor: Michael Bugeja is an award-winning professor at the Greenlee School of Journalism and Communication at Iowa State University. He’s an early critic of digital technology, recognizing that the mediation of interpersonal relationships via screens would pose societal and ethical problems. He’s been examining this phenomenon for years, including with his 2005 book Interpersonal Divide: The Search for Community in a Technological Age, 2017’s update Interpersonal Divide in the Age of the Machine, and an ongoing blog at Interpersonal-Divide.org that ties current events to briefs on the same themes. We’re grateful that Michael will occasionally share some of his topical posts with us here at The Technoskeptic, where we may include some extra contextual information for our readers.

This one comes on the heels of Facebook’s latest product announcement, an honest-to-goodness piece of physical hardware called Portal, a video phone with a “smart camera” which pans and zooms to track users:

The latest Facebook feature, Portal, is really only a hands-free video chatting device that follows you in your own home or wherever you plant the dang thing, including your office or classroom (and yes, some early adopter assistant professor will do that and publish a paper titled: “Framing the Frame: Facebook Portal’s Integration in Blended Course Development.”)

Give me a break. Or better still, Facebook, give us a break.

For the rest of the post, click here or visit: https://thetechnoskeptic.com/portal-privacy-invasion/

WHO-TV News: Social Media Fueling Fire For Hate Groups To Act


AMES, Iowa — It used to be a simpler, more civil time. “Iowa used to be the pillar of community standards when we had face-to-face interaction with our neighbors.”

As society has become enthralled with social media, Michael Bugeja, professor at the Greenlee School of Journalism and Communication at Iowa State University, says civility is eroding and hate is getting worse.

“As we gravitate more online we have to understand it gives us the convenience of sharing our views with little consequence,” he said.

Robert Bowers, the alleged shooter in the deadly Pittsburgh synagogue shooting, relayed his hate speech against Jews on a social media networking site called Gab just moments before the attack. Bugeja said, “Sometimes many people get overlooked and where they get accepted is on those fringes.”

Bugeja, who authored Interpersonal Divide in the Age of the Machine, believes Bowers and others who support his views are a population on the fringe of society that has found acceptance for those beliefs on public forums like social media.

“Many on the fringe suffer from severe anger. When they hear uncivil speech and media or what sounds like incitement to do an act, those people on the fringe will believe their time has come. That they’ve been right all along.”

For the rest of the article, visit WHO-TV at this URL: https://whotv.com/2018/10/29/social-media-fueling-fire-for-hate-groups-to-act/

Gab offline following Pittsburgh Synagogue attack

[Screenshot of the statement posted on the gab.com website, Oct. 29, 2018]

The above statement appeared today on the gab.com social network, which allows all manner of speech, including what many dub hate speech, following a shooting spree by one of its users at a Pittsburgh synagogue where 11 worshipers were slain.

Alleged shooter Robert Bowers had posted a bio that slandered Jews as “the children of Satan.” Shortly before the attack on the Tree of Life synagogue, he posted on Gab: “Screw your optics, I’m going in.”

According to USA Today, hosting provider GoDaddy gave Gab 24 hours to switch providers because the website violated its terms of service: “GoDaddy investigated and discovered numerous instances of content on the site that both promotes and encourages violence against people.”

USA Today also noted that Medium, an online publishing tool, suspended Gab’s account because the platform was used to disseminate statements associated with violence, including one right after the synagogue attack on Saturday.

Mashable reported that Gab has been banned by PayPal, “and fellow online payment service Stripe is looking to cut off the site. Gab’s new hosting service, Joyent, reportedly will suspend the site from 9 a.m. ET on Monday, Oct. 29.”

Gab was created in 2016 as an alternative to traditional social networks like Twitter and Facebook. Shortly after going online, founder Andrew Torba told BuzzFeed:

“What makes the entirely left-leaning Big Social monopoly qualified to tell us what is ‘news’ and what is ‘trending’ and to define what “harassment” means? It didn’t feel right to me, and I wanted to change it, and give people something that would be fair and just.”

The New York Times noted that Gab gave far-right activists like white nationalist leader Richard B. Spencer a platform by which to express what many would label repugnant views. In 2016, critic-at-large Amanda Hess wrote that Gab is “a throwback to the freewheeling norms of the old internet, before Twitter started cracking down on harassment and Reddit cleaned out its darkest corners.”

This week the Washington Post reported that Torba is taking steps to revive Gab, vowing that he will rebuild it from the ground up, if necessary. The Post writes that Torba has become “a charismatic leader of the ‘alt-tech’ movement which, among other things, dedicates itself to protecting and building tech to house ‘free speech’ — including extremist ideologies that are increasingly unwelcome on mainstream sites.”

Gab does prohibit threats of violence, illegal pornography, or posting of private information without consent. A key feature has been platform tools that allow users to filter out objectionable content.

Revisiting ProPublica’s Report on Algorithmic Hate Speech

Last year ProPublica investigated Facebook’s hate-speech algorithms, learning that moderators were being taught to treat “white men,” but not “black children,” as a protected class. The report is worth revisiting because it shows how the complexities of the English language confound machine logic.

Machines correlate without causation. That’s a key concept in Interpersonal Divide’s critique of “artificial intelligence.” Technical systems are adept at answering 4 of the 5 “Ws” and H of mass communication: Who, What, When, Where and How.

Those are the only qualifiers you need to make a sale. Social media, especially Facebook, sell to and surveil us simultaneously whenever we feed their algorithms. If we receive a new pair of shoes in the mail for our birthday, and we display them, thanking Grandma, the machine knows who got what gift when and how from where. That’s the point. That’s how social networks create value via consumer narratives.

Interpersonal Divide cites computer scientist and author Jaron Lanier’s explanation. Machines with copious amounts of data may be able to discern odd commercial truths: People with bushy eyebrows who like purple toadstools in spring might hanker for hot sauce on mashed potatoes in autumn. That would enable a hot sauce vendor to place a link in front of bushy-eyebrowed Facebookers posting toadstool photos, increasing the chance of a sale, “and no one need ever know why.”[1]

The narrative knows:

  • Who: people with bushy eyebrows
  • What: hot sauce
  • When: autumn
  • Where: Facebook IP address
  • How: on mashed potatoes

No one ever need know Why. A sale is a sale is a sale.
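Lanier’s scenario can be sketched in a few lines of hypothetical code. The profile fields, the targeting rule, and the function name below are all invented for illustration; no platform’s actual code is implied:

```python
# Hypothetical sketch of correlation-without-causation targeting,
# in the spirit of Lanier's example. All names and rules are invented.

def match_offer(profile, rule):
    """Return True if every signal in the rule appears in the profile.

    The machine answers who/what/when/where/how -- it never asks why.
    """
    return all(profile.get(key) == value for key, value in rule.items())

# Who / What / When / Where signals scraped from posts and metadata.
profile = {
    "who": "bushy eyebrows",
    "likes": "purple toadstools",
    "when": "autumn",
    "where": "Facebook",
}

# A vendor's rule: a correlation discovered in the data, cause unknown.
hot_sauce_rule = {"who": "bushy eyebrows", "when": "autumn"}

if match_offer(profile, hot_sauce_rule):
    print("show ad: hot sauce on mashed potatoes")
```

The sketch makes the point concrete: the rule fires on matching signals alone, with no model of why bushy eyebrows and autumn should predict a hot-sauce purchase.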

When it comes to Facebook’s algorithm, however, we do know why “white men” outrank “black children” in machine logic. The algorithm, which purportedly has been tweaked since the ProPublica report, bases hate-speech decisions on what seems at first blush a logical foundation: if a suspected hate message targets a protected class, such as race and gender (“white men”), that trumps a class modified by a subset such as age (“black children”).

Of course, the English language doesn’t work this way, especially since one word may have multiple meanings that change based on its position in a sentence. Rearrange the words of this sentence–“Stop drinking that this instant; tea is better for you!”–and you get several variations, such as “Better stop drinking that; this instant tea is for you.”

As ProPublica noted, Facebook allowed U.S. Congressman Clay Higgins to threaten “radicalized” Muslims with this post: “Hunt them, identify them, and kill them. Kill them all. For the sake of all that is good and righteous. Kill them all.”

However, Facebook removed this post from Boston poet and Black Lives Matter activist Didi Delgado: “All white people are racist. Start from this reference point, or you’ve already failed.”

Why? Human moderators, trained by the machine to think like one, followed the algorithmic rule that “white people” plus an attack (“racist”) trumped “radicalized” (a subset) Muslims. Everyone seemed to miss “hunt” and “kill them all.”

This illustration depicts how that could have happened.

[Illustration: Facebook Bias]
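The reported rule can also be sketched in a few lines of hypothetical code. The category set and function below are invented for illustration and are not Facebook’s actual implementation:

```python
# Hypothetical sketch of the moderation rule ProPublica described.
# The category set is invented for illustration only.

# Protected classes under the reported rule: race, gender, religion.
PROTECTED = {"white", "black", "men", "women", "muslims", "jews"}

def protected_group(words):
    """A group is shielded only when EVERY word names a protected class.

    Any modifier outside the set -- age ("children"), behavior
    ("radicalized") -- strips protection. That is the flaw ProPublica
    exposed.
    """
    return all(word.lower() in PROTECTED for word in words)

protected_group(["white", "men"])            # shielded: race plus gender
protected_group(["black", "children"])       # "children" (age) strips protection
protected_group(["radicalized", "Muslims"])  # "radicalized" strips protection
```

Under such a rule, Higgins’s call to “kill them all” survives because “radicalized” Muslims is not a protected group, while Delgado’s post falls because “white people” is.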

Interpersonal Divide asks readers to understand technology from a programming rather than consumer perspective so as to explain “why” things happen in the age of the machine.

This is one small incident, but it indicates a larger issue: machines correlating biased data through flawed computer logic. You can read more about Facebook’s rules in the sites referenced in the ProPublica report.


[1] Jaron Lanier, Who Owns the Future (New York: Simon and Schuster, 2013), p. 115.

 

Kavanaugh-Ford Hoaxes Appeal to Base–Instinct, That Is

 

Over the weekend in Facebook and Twitter feeds, Americans–not Russians–perpetuated false claims seeking to play to their “base,” a word whose first meaning is defined as “the lowest part,” as in base instinct.

The goal of partisan trolls was to debase the names and reputations of assault survivor Christine Blasey Ford and Supreme Court nominee Brett Kavanaugh. Sensational claims about both have been shown to be baseless.

The distressing news, however, was that these false reports–1982 photos of a drunk Kavanaugh and a series of photos depicting Ford as a Democratic operative–were believed by many, flooding the internet and spreading to friends listed in social media accounts.

Sadly, lies have been shown to travel faster and farther than truth, according to Slate.

Thankfully, Snopes.com has been able to post refutations almost as soon as the false accounts were posted.

Concerning the Kavanaugh photo, it stated:

While the picture on the right is, in fact, Brett Kavanaugh, the picture of the passed-out man on the left is a Getty Images stock photo titled “portrait of a young man asleep on the couch after drinking too much beer” that was created long after 1982.

Concerning the Ford photo, it stated:

This photograph was taken on 12 November 2016 at a protest against President Trump in New York City by photographer Christopher Penler. The image is available on a variety of stock photograph websites, where it is consistently presented as an image of an anonymous woman with a “Not My President” sign. It wasn’t until Christine Blasey Ford came forward with an allegation of sexual assault against Supreme Court nominee Brett Kavanaugh in September 2018 that the picture started circulating with Ford’s name attached to it.

It is important to recognize that hoaxes play on the deeply held beliefs, fears, convictions and desires of the mass media audience. In controversial political news, such as Ford’s allegation of sexual violence, conditions were rife for fake news and hoaxes.

For the record, here is the Sept. 27 transcript of the Kavanaugh hearing, supplied by the Washington Post.

Interpersonal Divide in the Age of the Machine cautions readers about Internet trolls and how they influence public perception. Here’s an excerpt:

Hoaxes. Hacks. Stunts. Pranks. Fraud. Counterfeits. Conspiracy theories. Altered photographs. Doctored records. Viral videos. Facts died in the process. “The era of the fact is coming to an end,” writes Harvard historian Jill Lepore in the New Yorker, creating mayhem, “not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines.”

The loss of fact has led to other interpersonal losses. Thus, it is important for everyone who uses social media to fact-check claims on Snopes.com or traditional news sites.

Weaponizing Wikipedia: GOP Senators Doxed

Doxing is the practice of sharing private information about an individual via use of “publicly available databases and social media websites, hacking, and social engineering”–Wikipedia

As the world watched political and personal strategies play out in the Sept. 27, 2018 Supreme Court hearings, another digital strategy was being launched against GOP senators: doxing.

According to The Washington Post, Lindsey Graham (R-S.C.) was one of three Republicans whose phone numbers and home addresses were added to Wikipedia biography pages. This occurred while Graham was questioning Supreme Court nominee Brett Kavanaugh.

Utah Sens. Mike Lee (R) and Orrin G. Hatch (R) were similarly doxed.

The Post published this screenshot redacting private information about Hatch.

But this was not the end of the Wikipedia incident. After the private information was removed from Wikipedia, the addresses and phone numbers circulated again on Twitter via the account @congressedits, which The Post described as “a social media ‘accountability bot’ that tweets edits to the online encyclopedia made from IP addresses assigned to the U.S. Capitol,” taking a screenshot and sending it to 65,000 followers.

@congressedits
  • Type of site: Twitter account
  • Available in: English
  • Website: twitter.com/congressedits
  • Launched: July 8, 2014
  • Current status: Online

Wikipedia states that @congressedits tweets changes made by “anyone using a computer on the U.S. Capitol complex’s computer network, including both staff of U.S. elected representatives and senators as well as visitors such as journalists, constituents, tourists, and lobbyists.”

While the news media consider @congressedits a digital watchdog, inasmuch as reporters instantly see what House and Senate aides are posting about their bosses, doxing remains a semi-anonymous weapon in the digital arsenal of partisan politics. Typically, content such as this can be traced to an IP address, indicating where the doxing took place (in this case, a House computer).

Tracking the IP address may narrow the number of suspects, but plausible deniability is an alibi inasmuch as staffers can claim, “It wasn’t me.”

That’s partly true. It was the technology.

Once again, this incident shows the nature of technology. Purpose and programming–meant for transparency and public access–were weaponized during live testimony in a historic proceeding.

Interpersonal Divide in the Age of the Machine devotes several chapters to the nature of technology, i.e., that of a scorpion: it is what it is. Here’s a citation:

The French-Maltese philosopher Jacques Ellul believed that technology is “a self-determining organism or end in itself whose autonomy transformed centuries-old systems while being scarcely modified in its own features.”[1] In simple terms, that means that technology changes everything it touches without changing much itself. Introduce technology into the economy, and the economy is all about technology. Introduce it into the home, and home life is about the technology. Introduce it into school systems, and education is about the technology. Introduce it into employment, and you have the same effect.

Introduce it into an “accountability” bot such as @congressedits, and the bot no longer is about accountability but doxing to shape public opinion according to partisan politics.


[1] Jacques Ellul, “The Autonomy of the Technological Phenomenon,” in Robert C. Scharff and V. Dusek (eds.), Philosophy of Technology: The Technological Condition (Mass.: Blackwell, 2003), p. 346.

When Public Space Becomes Unsafe

With a spate of recent daylight murders garnering national attention–a female jogger and golfer in Iowa and another jogger in New York City–one wonders whether the concept of “Take Back the Night,” an effort to end violence, especially against women, should be revised to “Take Back the Day.”

A recent Gallup poll shows close to 40 percent of adults–45 percent women, 27 percent men–believe the immediate area around their home may be unsafe to walk alone at night.

Daylight violence is disturbing because of its brazen disregard for witnesses. According to the U.S. Department of Justice, “the number of violent crimes committed by adults increases hourly from 6 a.m. through the afternoon and evening hours, peaks at 10 p.m., and then drops to a low point at 6 a.m.”

Violent crimes by juveniles hit a high point between 3 p.m. and 4 p.m., the hour immediately following the end of the school day.

Many variables affect our perception of safety. As the website “Safe Communities” posits, factors include life experiences, beliefs, type of community, age, socioeconomic status, type of job and employment status, to name a few.

For insight, we might look to the philosophy of social activist Parker J. Palmer, who wrote that the most public place is the street, where people send a message through the channel of their bodies in real place, acknowledging that “we occupy the same territory, belong to the same human community.”[1]

Cited in Interpersonal Divide, Palmer discusses how suburban sprawl changed our notion of community. For instance, in the 1980s, mega malls replaced a Main Street that had been deemed unsafe. Then the malls themselves were deemed unsafe.

In his 1981 book, The Company of Strangers, Palmer made this prophetic statement:

When people perceive real habitat to be unsafe, they withdraw from it, and it becomes unsafe. “Space is kept secure not primarily by good lighting or police power but by the presence of a healthy public life.”[2]

Perhaps it is time for society to assess whether increasing use of technology has played a role in the withdrawal from community as Palmer had envisioned it, a communal and, in many ways, vibrant space. If we opt to spend more time in virtual rather than real habitat, even as we walk the digital streets, we may lose sight of what it means to occupy the same territory with neighbors and our moral obligation to nurture and monitor our collective interactions there.

It is also important to note that use of technology may mitigate risk. New digital products–wearables like Athena and Safer Pro–have been developed to send emergency alerts with GPS tracking to friends and loved ones.

[1] Parker J. Palmer, The Company of Strangers (New York: Crossroad, 1981), p. 39.

[2] Palmer, p. 48.