Weaponizing Wikipedia: GOP Senators Doxed

Doxing is the practice of sharing private information about an individual, gathered through "publicly available databases and social media websites, hacking, and social engineering."–Wikipedia

As the world watched political and personal strategies play out in the Sept. 27, 2018 Supreme Court hearings, another digital strategy was being launched against GOP senators: doxing.

According to The Washington Post, Lindsey Graham (R-S.C.) was one of three Republicans whose phone numbers and home addresses were added to their Wikipedia biography pages. This occurred while Graham was questioning Supreme Court nominee Brett Kavanaugh.

Utah Sens. Mike Lee (R) and Orrin G. Hatch (R) were similarly doxed.

The Post published this screenshot redacting private information about Hatch.

But this was not the end of the Wikipedia incident. After the private information was removed from Wikipedia, the addresses and phone numbers circulated again on Twitter via the account @congressedits, which The Post described as "a social media 'accountability bot' that tweets edits to the online encyclopedia made from IP addresses assigned to the U.S. Capitol." The bot took a screenshot of the edits and sent it to its 65,000 followers.

@congressedits at a glance (from its Wikipedia entry):
Type of site: Twitter account
Available in: English
Website: twitter.com/congressedits
Launched: July 8, 2014
Current status: Online

Wikipedia states that @congressedits tweets changes made by “anyone using a computer on the U.S. Capitol complex’s computer network, including both staff of U.S. elected representatives and senators as well as visitors such as journalists, constituents, tourists, and lobbyists.”
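
To illustrate the mechanism, here is a minimal sketch in Python of the kind of monitoring such a bot performs, assuming the requests library and Wikipedia's public recent-changes API. Wikipedia attributes anonymous edits to the editor's IP address, so a script needs only to check whether that address falls within a watched range. The IP ranges below are placeholders rather than actual congressional addresses, and the real bot's implementation surely differs.

# A minimal sketch of the kind of monitoring @congressedits performs.
# The IP ranges are placeholders, not the actual congressional ranges.
import ipaddress
import requests

WATCHED_RANGES = [ipaddress.ip_network("192.0.2.0/24"),     # placeholder range
                  ipaddress.ip_network("198.51.100.0/24")]  # placeholder range

API = "https://en.wikipedia.org/w/api.php"

def recent_anonymous_edits(limit=50):
    """Fetch recent anonymous edits; Wikipedia credits them to the editor's IP."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "user|title|timestamp|comment",
        "rcshow": "anon",   # anonymous edits only
        "rclimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=10).json()
    return data["query"]["recentchanges"]

def from_watched_network(ip_string):
    """Return True if the editing IP falls inside one of the watched ranges."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in WATCHED_RANGES)

for edit in recent_anonymous_edits():
    if from_watched_network(edit["user"]):
        print(edit["timestamp"], edit["title"], "edited from", edit["user"])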

While the news media consider @congressedits a digital watchdog, inasmuch as reporters instantly see what House and Senate aides are posting about their bosses, doxing remains a semi-anonymous weapon in the digital arsenal of partisan politics. Typically, content such as this can be traced to an IP address, indicating where the doxing took place (in this case, a House computer).

Tracking the IP address may narrow the number of suspects, but plausible deniability is an alibi inasmuch as staffers can claim, "It wasn't me."

That’s partly true. It was the technology.

Once again, this incident shows the nature of technology. Purpose and programming–meant for transparency and public access–were weaponized during live testimony in a historic proceeding.

Interpersonal Divide in the Age of the Machine devotes several chapters to the nature of technology, i.e., that of a scorpion: it is what it is. Here's a citation:

The French-Maltese philosopher Jacques Ellul believed that technology is “a self-determining organism or end in itself whose autonomy transformed centuries’ old systems while being scarcely modified in its own features.”[1] In simple terms, that means that technology changes everything it touches without changing much itself. Introduce technology into the economy, and the economy is all about technology. Introduce it into the home, and home life is about the technology. Introduce it into school systems, and education is about the technology. Introduce it into employment, and you have the same effect.

Introduce it into an “accountability” bot such as @congressedits, and the bot no longer is about accountability but doxing to shape public opinion according to partisan politics.


[1] Jacques Ellul, "The Autonomy of the Technological Phenomenon," in Robert C. Scharff and V. Dusek (eds.), Philosophy of Technology: The Technological Condition (Malden, Mass.: Blackwell, 2003), p. 346.

When Public Space Becomes Unsafe

With a spate of recent daylight murders garnering national attention–a female jogger and a golfer in Iowa and another jogger in New York City–one wonders whether the concept of "Take Back the Night," an effort to end violence, especially against women, should be revised to "Take Back the Day."

A recent Gallup poll shows close to 40 percent of adults–45 percent of women, 27 percent of men–believe the immediate area around their home may be unsafe to walk alone at night.

Daylight violence is disturbing because of its brazen disregard for witnesses. According to the U.S. Department of Justice, “the number of violent crimes committed by adults increases hourly from 6 a.m. through the afternoon and evening hours, peaks at 10 p.m., and then drops to a low point at 6 a.m.”

Violent crimes by juveniles hit a high point between 3 p.m. and 4 p.m., the hour immediately following the end of the school day.

Many variables affect our perception of safety. As the website “Safe Communities” posits, factors include life experiences, beliefs, type of community, age, socioeconomic status, type of job and employment status, to name a few.

For insight, we might look to the philosophy of social activist Parker J. Palmer, who wrote that the most public place is the street, where people send a message through the channel of their bodies in real place, acknowledging that "we occupy the same territory, belong to the same human community."[1]

Cited in Interpersonal Divide, Palmer discusses how suburban sprawl changed our notion of community. For instance, in the 1980s, mega malls replaced Main Street, which later was deemed unsafe. Then the malls themselves were deemed unsafe.

In his 1981 book, The Company of Strangers, Palmer made this prophetic statement:

When people perceive real habitat to be unsafe, they withdraw from it, and it becomes unsafe. “Space is kept secure not primarily by good lighting or police power but by the presence of a healthy public life.”[2]

Perhaps it is time for society to assess whether increasing use of technology has played a role in the withdrawal from community as Palmer had envisioned it, a communal and, in many ways, vibrant space. If we opt to spend more time in virtual rather than real habitat, even as we walk the digital streets, we may lose sight of what it means to occupy the same territory with neighbors, and of our moral obligation to nurture and monitor our collective interactions there.

It is also important to note that use of technology may mitigate risk. New digital products–wearables like Athena and Safer Pro–have been developed to send emergency alerts with GPS tracking to friends and loved ones.

[1] Parker J. Palmer, The Company of Strangers (New York: Crossroad, 1981), p. 39.

[2] Palmer, p. 48.

Interpersonal Divide Favorably Reviewed in International Journal of Communication

The following is from the introduction and conclusion of the review by Min Wang (International Journal of Communication 12 [2018], Book Review, pp. 3776–3779).

Interpersonal Divide in the Age of the Machine … will likely appeal to students and scholars in a great variety of disciplines, including media studies, communication ethics, interpersonal communication, media literacy, psychology, sociology, data science, information technology, and science and technology studies.

In plain language and jargon-free prose, Bugeja fulfills his goal to address the impact of media and technology on human communities, universal principles, cultural values, and interpersonal relationships. His creative writing style makes Interpersonal Divide in the Age of the Machine accessible to multidisciplinary readers who wish to explore how media and technology, particularly big data and artificial intelligence, structure our lives. The critically-reviewed literature and abundant evidence support the viewpoints, arguments, and predictions in the book in an eloquent manner. The well-designed end-of-chapter exercises are directed interactively at students who can report the results of their exercises and experiments through discussion and debate, providing an outlet to inspire ideas, dialogue, and introspection.

http://ijoc.org/index.php/ijoc/article/view/10236/2455

Digital Crazytown and the Anonymous Memo

Unethical media and politics have combined to create “Digital Crazytown.”

On Sept. 5, the New York Times published an anonymous memo by a senior official in the Trump Administration who called the president so amoral that his “appointees have vowed to do what we can to preserve our democratic institutions while thwarting Mr. Trump’s more misguided impulses until he is out of office.”

The President is so irate that he believes the source of the memo may have committed “treason,” prompting dozens of his top officials to claim they were not the author.

One of those was Chief of Staff John Kelly, cited in Bob Woodward’s new book Fear as stating:

“He’s an idiot. It’s pointless to try to convince him of anything. He’s gone off the rails. We’re in Crazytown.”

Crazytown is an apt phrase describing the milieu in Washington.

As author of Interpersonal Divide in the Age of the Machine and Living Media Ethics, I can comment on two lingering questions concerning this issue: (a) Can technology help identify the author? and (b) Should the Times have published an anonymous op-ed?

I have one more qualification: my Ph.D. in English, with specialties in Elizabethan playwrights such as Shakespeare and Ben Jonson.

In 2005, I published a piece in Inside Higher Ed in which I used my textual editing skills–developed to discern "fair" and "foul" copies of plays–to help identify a professor who kept leaving unflattering anonymous notes in the mailboxes of colleagues. Here's what I wrote in the essay titled "Such Stuff As Footnotes Are Made On":

You see, over time, each of us develops a distinct textual signature. We may be given to odd phrases, locutions and colloquialisms, such as “in regards to” or “clearly, it seems” or “in cahoots with,” as in, “In regards to his annual review, clearly, it seems, John Doe is in cahoots with the Dean.” Collect enough writing samples, and you can identify the likely source of such a sentence, just as you can discern a fair from foul excerpt of a Shakespearean play.

In this case, I took awkward locutions in the anonymous notes and ran them through thousands of emails on the university server. Bingo!
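
Here is a minimal sketch in Python of the kind of phrase matching described above. The distinctive locutions are the examples from my essay; the email corpus and author names are hypothetical.

# A minimal sketch of phrase matching across a corpus of writing samples.
# The corpus and author names are hypothetical examples.
from collections import Counter

DISTINCTIVE_PHRASES = ["in regards to", "clearly, it seems", "in cahoots with"]

def phrase_counts(text, phrases=DISTINCTIVE_PHRASES):
    """Count how often each distinctive locution appears in a text."""
    lowered = text.lower()
    return Counter({phrase: lowered.count(phrase) for phrase in phrases})

def rank_candidates(emails_by_author):
    """emails_by_author maps an author's name to a list of email bodies."""
    scores = {}
    for author, emails in emails_by_author.items():
        total = Counter()
        for body in emails:
            total.update(phrase_counts(body))
        # Normalize by corpus size so prolific writers are not unfairly flagged.
        word_count = sum(len(body.split()) for body in emails) or 1
        scores[author] = sum(total.values()) / word_count
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical usage:
corpus = {
    "Prof. A": ["In regards to the annual review, clearly, it seems, ..."],
    "Prof. B": ["Regarding the annual review, I believe the committee ..."],
}
print(rank_candidates(corpus))  # the likeliest source appears first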

Textual attribution of this kind is a popular application; its most famous practitioner may be Shakespeare professor Don Foster of Vassar College, known for outing journalist Joe Klein as the anonymous author of the 1996 book Primary Colors.

A quick analysis of the text of the anonymous memo focused on the word "lodestar"–a navigation star, typically Polaris, used to guide a ship–a term used by Vice President Mike Pence. People quickly glommed on to that, as in this video:

Not so fast. First of all, Pence issued a fierce denial that he was the author, stating in the Times:

“Anyone who would write an anonymous editorial smearing this president who’s provided extraordinary leadership for this country should not be working for this administration. They ought to do the honorable thing and they ought to resign.”

We expect denials, of course. However, there is a big difference these days in detecting linguistic fingerprints compared to when Foster and I did it years ago. Pence could have been set up by someone so technologically savvy that use of that word “lodestar” was deliberate.

That’s how digitally manipulative we have become.

Nonetheless, tech applications using machine intelligence have been used to detect authorship for the past decade. Case in point: When Harry Potter author J.K. Rowling wrote the novel The Cuckoo's Calling under the pen name Robert Galbraith, readers noticed linguistic similarities. An attribution program was applied, and Bingo!–Rowling was identified.

Use of AI to detect the author of the memo typically can work around planted words like "lodestar" and provide statistical probabilities concerning who wrote it.
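
For illustration, here is a minimal sketch of one statistical approach–a simplified function-word comparison in the spirit of Burrows' Delta. Because the feature list contains only common words, a planted term such as "lodestar" never enters the calculation. The candidate names and writing samples are hypothetical, and real attribution systems use far richer features.

# A minimal sketch of function-word authorship comparison.
# Candidate names and samples are hypothetical; planted vocabulary such as
# "lodestar" is simply absent from the feature list.
import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was",
                  "it", "for", "on", "with", "as", "but", "by", "not"]

def profile(text):
    """Relative frequency of each function word in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[word] / total for word in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two frequency profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def attribute(unknown_text, candidate_texts):
    """Rank candidate authors by stylistic closeness to the unknown text."""
    target = profile(unknown_text)
    return sorted(candidate_texts.items(),
                  key=lambda item: distance(target, profile(item[1])))

# Hypothetical usage with writing samples keyed by candidate name:
# samples = {"Candidate A": open("a_speeches.txt").read(),
#            "Candidate B": open("b_speeches.txt").read()}
# print(attribute(open("memo.txt").read(), samples))  # closest match first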

The next question is whether the Times should have published it. Here’s how the newspaper defended its decision:

The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure. We believe publishing this essay anonymously is the only way to deliver an important perspective to our readers.

Well, wait a minute. In an era of fake news, including stories promulgated to spoof the public and sway its opinion–such as this one about Michael Jordan resigning from the Nike Board because of the ad featuring Colin Kaepernick–journalistic integrity "trumps" sensationalism. So no, the Times should not have published the anonymous op-ed unless–and this is a BIG unless–someone so high in the administration wrote it that editors just could not resist the temptation to violate their own values. Here's an excerpt about anonymous sources from the Times ethics code:

Because our voice is loud and far-reaching, The Times recognizes an ethical responsibility to correct all its factual errors, large and small. … We observe the Newsroom Integrity Statement, promulgated in 1999, which deals with such rudimentary professional practices as the importance of checking facts, the exactness of quotations, the integrity of photographs and our distaste for anonymous sourcing [my italics].

Now the Times faces another ethical dilemma. The Opinion Section operates apart from the News Division. Will one investigate the other? President Trump has suggested just that in this tweet:

That’s not as far-fetched as it might seem, and that statement is testament to just how crazy journalism along with politics has become in digital Crazytown.

The forthcoming edition of Living Media Ethics has chapters on manipulation, temptation and ethics codes, including anonymous sourcing and its dangers. Interpersonal Divide includes chapters on artificial intelligence and how it is being used in datamining and surveillance.

To Share or Not to Share: Racist Robocalls in Ethics and Technology Courses

I am the author of two books that foresaw how technology would be used to foment hate–Interpersonal Divide and Living Media Ethics. I am also a professor who teaches media ethics and technology and social change at Iowa State University. In the past, I could share distressing racial content with an appropriate trigger warning. But what to do when the content is so reprehensible that I, as instructor, wish I had not viewed and heard it?

Here’s how I decided to handle it:

1. Do not share links. Rather than direct my students to sites containing the vile messages, said to be the work of white supremacists, I provided the above screenshot, which appears when “racist robocalls” is typed into Google (2 September 2018).

2. Do share multiple warnings. I have about two dozen African-American and Latinx students in my classes. The content of these robocalls affects them in a despicable manner. There's another risk: One of the calls concerns the murder of Mollie Tibbetts, a young woman from our sister school, the University of Iowa. There's no telling whether someone in my classes knew her or knows her family.

3. Provide summaries. Those who wish to view the two stories based solely on the screenshot above will have to type "racist robocalls" themselves and follow the links from their own smartphones, tablets or computers. They will have done so voluntarily, after being forewarned by their teacher.

For discussion purposes, I summarized verbatim from mainstream media:

An assertion by a white gubernatorial candidate that Florida voters can’t afford to “monkey this up” by voting for his black opponent was widely viewed as a “dog whistle” to rally racists. If it were a dog whistle — and GOP candidate Ron DeSantis denies any racial intent against Democrat Andrew Gillum — then a jungle music-scored robo-call that has circulated in Florida is more akin to a bullhorn–“‘We Negroes’ robocall is an attempt to ‘weaponize race’ in Florida campaign, Gillum warns.” (The Washington Post, 9/2/2018)

An out-of-state white supremacy group has claimed responsibility for disturbing neo-Nazi robocalls using the murder of Mollie Tibbetts to push a violent, racist message in Iowa. Latino leaders said the calls are frightening the community and causing serious anxiety and fear in central Iowa. The 1 1/2-minute robocall begins by talking about Tibbetts' death, saying she was "stabbed to death by an invader from Mexico." It goes on to call for the deaths of all 58 million Latinos in the United States–"Alarming neo-Nazi robocall hits central Iowa." (KCCI-Des Moines, 9/2/2018)

I feel good about this practice, even though I know my journalism colleagues may disagree, believing I am sanitizing the world. I also know most students may have no problem viewing and hearing the content. (Some might, and those are ones I am concerned about here.)

Educators may understand my method, focusing on the core concept in this exercise: Technology, once touted as our best hope to build inclusive communities, has been weaponized to destroy that idealistic goal.

Media ethics does call for judicious use of hate messages, especially when content emanates from a white supremacy group. You don’t want to promote such groups, even though summaries such as those above do provide a modicum of public exposure.

But good teaching entails understanding how students learn. The point is for both classes to recognize that technology is being used to strike fear in the populace. My lesson plan does not call for igniting everyone’s emotions to such extent that they leave class in fear of or angry about the world.

It is that fear, by the way, that those robocalls intend to trigger. Not so this time.

Student Debt Likely to Rise when Colleges Add Alexa

For the past decade, the first and second editions of Interpersonal Divide have tracked technology purchases by universities, and all of them resulted in the same unfortunate outcome: adding costs, directly or indirectly, to student debt.

And now we have three universities–Arizona State, Northeastern and St. Louis–making the same questionable decision, only this time inviting Alexa to interact with and engage students, all in the belief that this will enhance the student experience.

The decision has been controversial in some quarters. Consider Barbara Fister's astute questions about privacy, published in Inside Higher Ed:

How much information is shared with college staff? How much is shared with Amazon? Can students purge information from its history? Can campus police or other law enforcement use recordings in an investigation? Can the policy be read in under 30 minutes and understood without a JD? What about people who didn’t agree to the policy but are captured as they visit the student who lives with an Echo? What do you do if some joker visits your room and orders up fifteen pizzas to be charged to your credit card?

I would add to that list of questions the potential violations of the Family Educational Rights and Privacy Act (FERPA), which protects student records. How about roommates or visiting parents and friends overhearing grades?

Cost is my main concern, as Alexa is programmed for profit rather than for pedagogy.

With student debt now at the $1.5 trillion mark, we continue to disenfranchise generations in the belief that consumer technology promotes education rather than corporate revenue.

I have been tracking this trend for more than a decade.

In 2005, in Inside Higher Ed, I was among the first to criticize Duke's iPod giveaway to 1,650 first-year students. In an essay titled "The Medium is the Moral," I wrote: "Almost immediately, the 'iPod First-Year Experience' was dubbed a trendy gimmick, and the university went on the defensive, emphasizing that the Apple music player was the device of choice for a variety of educational tasks meant to keep pace with a mobile generation of learners."

In 2008, in an essay titled “Harsh Realities About Virtual Ones,” I wrote: “Rising costs of a college degree at our wireless colleges and universities have resulted in increasing public scrutiny, student debt and budget models based on marketing rather than pedagogical concepts. Academe’s insatiable investment in virtual worlds, social networks and other consumer applications is a benchmark of how far we will go and how much money we will spend in the name of engagement.”

My Chronicle of Higher Education articles also addressed this issue throughout the years:

"Facing the Facebook," 23 January 2006. One of the first essays about Facebook use and misuse in academia.

"Distractions in the Wireless Classroom," 26 January 2007. Another glimpse into one of the earliest pieces about wireless technology undermining pedagogy.

"Second Thoughts About Second Life," 14 September 2007. A look into the consequences and expenses of institutions requiring or recommending classes participate in virtual worlds, adding to the cost of a college degree.

"Second Life, Revisited," 12 November 2007. A more in-depth look into issues like harassment in virtual worlds and whether institutions could be held liable.

"Classroom Clickers and the Cost of Technology," 5 December 2008. The added cost of requiring students to purchase what amounted to a TV-like remote control meant to foster engagement.

"Reduce the Technology, Rescue Your Job," 9 November 2009. How costs of corporate technology not only increase student debt but decrease funds for teacher salaries.

These are but a sampling of my critiques of university administrators failing to understand the nature of technology. Developed by the military to surveil and advanced by business to sell, consumer gadgets like Alexa do both simultaneously to reap profit for mega corporations.

We’ll close with an excerpt from Interpersonal Divide in the Age of the Machine that addresses this effect:

Throughout Interpersonal Divide we have argued that data define us more now than ever based on delivery systems that surveil and distract us around the clock. Students, in particular, often fail to deliberate the impact of overuse on their psyches as well as the digital consumerism that defines their mores and generation. Mediated communication isolates as much as it collaborates, insulates as it innovates. Without interpersonal engagement, Digital Natives will lose authentic connectedness that is the chief attribute of physical community.

Perhaps that paragraph should have been directed at university administrations that believe the hype that digital devices provide a virtual connection with their constituents.

As Barbara Fister astutely observes in her Inside Higher Ed essay, "Eventually, proponents hope they will be able to answer highly personalized questions – 'what grade did I get on my chemistry test?' – and even become personal tutors. Because going to college is all about spending time in your room talking to a sentient hockey puck."

App Mistakenly Matches Congress Members with Mugshots

An ACLU test of Amazon's "Rekognition" facial identification tool falsely matched 28 members of Congress–disproportionately people of color–with criminal mugshots, documenting the algorithmic bias covered in Interpersonal Divide in the Age of the Machine.

According to the ACLU, false matches included “six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.).” The organization is calling on Congress to join its efforts to halt law enforcement use of face surveillance.

You can read a full account of the test as covered by National Public Radio.

The ACLU test paired Rekognition software with a database of 25,000 arrest photos and then searched that database against photos of current members of Congress.
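
For readers curious about the mechanics, here is a minimal sketch of how such a test could be run with Amazon's own SDK (boto3): index the arrest photos into a face collection, then search that collection with each member's photo. The collection name, file paths and 80 percent threshold are assumptions for illustration; the ACLU's exact methodology may have differed.

# A minimal sketch of a Rekognition matching test using boto3.
# Collection name, paths and threshold are illustrative assumptions.
import glob
import boto3

rekognition = boto3.client("rekognition")
COLLECTION = "mugshot-test-collection"  # hypothetical collection name

def build_mugshot_collection(photo_dir):
    """Index a directory of arrest photos into a Rekognition face collection."""
    rekognition.create_collection(CollectionId=COLLECTION)
    for path in glob.glob(f"{photo_dir}/*.jpg"):
        with open(path, "rb") as image:
            rekognition.index_faces(CollectionId=COLLECTION,
                                    Image={"Bytes": image.read()},
                                    ExternalImageId=path.split("/")[-1])

def search_member_photo(path, threshold=80):
    """Search the collection for faces Rekognition considers a match."""
    with open(path, "rb") as image:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION,
            Image={"Bytes": image.read()},
            FaceMatchThreshold=threshold,
            MaxFaces=5)
    return [(match["Face"]["ExternalImageId"], match["Similarity"])
            for match in response["FaceMatches"]]

# Hypothetical usage:
# build_mugshot_collection("arrest_photos")
# for member_photo in glob.glob("congress_photos/*.jpg"):
#     print(member_photo, search_member_photo(member_photo))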

The results were no surprise to those who have read the new edition of Interpersonal Divide, which has covered Amazon and Facebook since 2004, noting how revenue-generating apps like “Rekognition” have changed ethical and social norms at home, school and work.

Here’s an excerpt about algorithmic racism from the second edition:

If you believe that institutional racism exists, that systems and organizations over time believe falsehoods about under-represented groups, then imagine the long-term consequences if such bias is coded in and programmed into machines. For instance, if machines compile data suggesting that a certain race, gender and age of people living in a given location may have a higher inclination for wrongdoing, and that person happens to wander into a wealthier section of the neighborhood, merchants equipped with apps might be prone to mistake innocent shoppers for potential shoplifters, depriving them of service or worse, accusing them of crimes.

Interpersonal Divide documents algorithmic racism across digital platforms and datasets, including decisions associated with social justice, such as determining whether inmates should be granted parole. (Penal boards in half the states use algorithms in parole hearings.)

The $19.95 book is available from Oxford University Press or from online booksellers.