Author: Michael Bugeja

Ralph Nader: Boeing MAX 8 crashes “a harbinger” of what is to come

Interpersonal Divide was among the first to question how technology overrode common sense in the Boeing MAX 8 crashes. Now consumer advocate Ralph Nader makes the same argument, dubbing the airline disasters a product of “algorithmic arrogance” in autonomous systems.

Last month Interpersonal Divide noted that technology was not only suspect in the failure of two Boeing 737 MAX 8 aircraft; it also played a role in pilot training, which consisted of a 56-minute online iPad lesson. Now Boeing software, combined with inadequate training, has been cited as a cause of the Lion Air and Ethiopian Airlines crashes, which resulted in the deaths of some 300 people.

Boeing and the Federal Aviation Administration showed an appalling lack of common sense after the second crash, that of the Ethiopian Airlines jet. We noted: “One aircraft falling from the skies might have gotten the FAA’s attention; a second similar craft doing the exact same thing–pilots struggling to right their jet–should have set off alarms. Nevertheless, the FAA initially didn’t act after countries around the globe had grounded the MAX 8 after the second crash. This might have occurred because the Administration is ‘data-driven’ and data from the black box of the Ethiopian Airlines crash was not immediately available.”

Boeing MAX 8 aircraft are grounded as the company updates its safety software, again relying on technology to fix what Nader believes is faulty engineering.

This is an example of what Ralph Nader calls “algorithmic arrogance.” In his interview with NPR, he states:

You know, this is a harbinger. You’ll be covering a lot of this in medical technology, autonomous weapons, self-driving cars. It’s the arrogance of the algorithms, not augmenting human intelligence but overriding it and replacing it. The significance of this Boeing disaster is that it can teach us some very important lessons about maintaining human intelligence and not ceding it to autonomous systems that have no moral base and no intuition.

In 2007, Interpersonal Divide author Michael Bugeja contributed a chapter, “Digital Ethics in Autonomous Systems,” to Lee Wilkins and Clifford G. Christians’ The Handbook of Mass Media Ethics.

Here is an excerpt, citing French-Maltese philosopher Jacques Ellul’s dictum that technology overrides human intellect, altering any system in a deterministic manner; “it realizes itself”:

In other words, apply technology to the economy, and the economy henceforth is about technology (think NASDAQ). Apply it to politics, and politics henceforth is about technology (think Kennedy-Nixon debates). Apply it to education, and education henceforth is about technology (think Sesame Street). Apply it to journalism, and journalism henceforth is about technology (think convergence). Moreover, because technology is autonomous and independent of everything, it cannot be blamed for anything.

Ralph Nader is challenging that last assumption, calling for a recall of all Boeing MAX 8 aircraft, just as auto companies recall defective cars: “They’ve got to go back to the drawing board, a clean sheet design of a new plane.”

A new edition of Wilkins and Christians’ The Handbook of Mass Media Ethics is in progress with an updated Bugeja chapter, to be published by Routledge/Taylor & Francis. Wilkins and Christians rank among the top media ethicists. Wilkins also focuses her research on media coverage of the environment and of hazards and risks. Christians is internationally known for his research in the contemporary study and philosophy of ethics and technology.

Detective to Discuss Case of Subway Pitchman Jared Fogle

Indiana State Police Detective Kevin Getz, a former journalism student of Interpersonal Divide author Michael Bugeja, will visit classes at Iowa State University to discuss his role in the 2015 arrest of Subway pitchman Jared Fogle on child pornography charges.

Indiana State Police Detective Kevin Getz will present a case study to Iowa State students in Media Ethics and in Technology and Social Change, exploring investigative methods and forensic analysis in the arrest of former Subway spokesman Jared Fogle and former Jared Foundation Executive Director Russell Taylor.

Getz will discuss details of the arrest, including how he and other authorities prevented additional crimes, leading to the rescue of 14 children.

In 2014, Getz was contacted by a woman whom Taylor had befriended and to whom Taylor had sent text messages containing disturbing sexual content. The woman decided to contact authorities when Taylor asked if he could send her child pornography.

Based on those messages, Getz and other authorities arrested Taylor, who was sentenced to 27 years in prison.

That arrest eventually led Getz and authorities to Fogle, who was sentenced to 15 years in prison on charges associated with child pornography and sexual conduct involving minors.

Interpersonal Divide contains sections about digital technology and crimes against children. Here’s an excerpt:

Since the 1990s, the Internet has been associated with the widespread distribution of child pornography. At the National Strategy Conference on Combating Child Exploitation, former Attorney General Eric Holder, Jr., spoke about the explosion of child pornography crimes because of Internet distribution through online communities and networks. Holder noted that the Internet “provides ground for individuals to create, access, and share child sexual abuse images worldwide at the click of a button,” with images “readily available through virtually every Internet technology including websites, email, instant messaging/ICQ, Internet Relay Chat (IRC), newsgroups, bulletin boards, peer-to-peer networks, and social networking sites.”[i] Increasingly, however, tweens, teens and young adults are contributing to the exploitation by sexting each other, sharing nudity through message, photo, email or social network. … The more students learn about technology, the more they will use it responsibly. The more distracted they are, the greater the risk.
[i] Eric Holder Jr., “Child Pornography,” National Strategy Conference on Combating Child Exploitation in San Jose, California, May 19, 2011; https://www.justice.gov/criminal-ceos/child-pornography

Detective Getz will discuss how text messages and other digital content led to arrests in the Fogle case.

Getz is a 1990 graduate of Ohio University’s Scripps School of Journalism. He joined the Indiana State Police in 1993 and served in the Criminal Investigation Division before his current assignment with the Indiana Crimes Against Children Unit. Getz and his wife Deborah have three children, Elizabeth, Thomas and Katie.

Technology overrode common sense in Boeing MAX 8 crashes

Over-reliance on technology and questionable FAA oversight were linked to two crashes that killed more than 300 people. The second crash occurred 11 days after the Seattle Times questioned Boeing about safety flaws.

Technology was not only suspect in the failure of two Boeing 737 MAX 8 aircraft; it also played a role in pilot training.

In a “Today” show segment, pilots reportedly received training via a 56-minute online iPad lesson about an aircraft whose faulty software also was suspected of causing the Lion Air crash in October and the Ethiopian Airlines crash earlier this month.

Worse, some pilots were not informed about certain safety software systems installed on their planes. According to Politico, U.S. pilots had complained at least five times about difficulty controlling the aircraft during critical stages of flight.

A particularly distressing factor in the two crashes concerned additional safety features that required a pricey upgrade for airlines purchasing the MAX 8, according to the New York Times: “As the pilots of the doomed Boeing jets in Ethiopia and Indonesia fought to control their planes, they lacked two notable safety features in their cockpits. One reason: Boeing charged extra for them.”

The Times noted that such upgrades typically do not involve safety (more bathrooms, for instance). In the aftermath of the crashes, Boeing will no longer charge extra for one of those features, in an attempt to get the MAX 8 airborne again.

Eleven days before the second crash, Seattle Times reporter Dominic Gates had questioned Boeing about the power of the flight control system, designed to push the nose of the aircraft down to avert a stall. He had also learned of a reset function that could override a pilot’s corrective response, causing the plane’s nose to keep pushing downward.

His investigative report also disclosed failed oversight by the Federal Aviation Administration:

The FAA, citing lack of funding and resources, has over the years delegated increasing authority to Boeing to take on more of the work of certifying the safety of its own airplanes.

His report illustrates the importance of fact-based journalism in a case where common sense–a distinctly human trait–was overridden by machines. This applies not only to inadequate online training, especially on tablets with insistent pinging and notifications, but also to the FAA, which allowed the MAX 8 to fly after the Ethiopian Airlines crash.

One aircraft falling from the skies might have gotten the FAA’s attention; a second similar craft doing the exact same thing–pilots struggling to right their jet–should have set off alarms.

Nevertheless, the FAA initially didn’t act after countries around the globe had grounded the MAX 8 after the second crash. This might have occurred because the Administration is “data-driven” and data from the black box of the Ethiopian Airlines crash was not immediately available.

In an interview with NPR, Peter Goelz, former managing director of the National Transportation Safety Board, said “the FAA has really prided itself on being a data-driven organization, that they don’t make ad hoc decisions” on “anecdotal evidence.” He added that the FAA has “a close working and regulatory relationship with Boeing.”

That relationship may be too close. Last week the FBI opened a criminal investigation into the certification of the Boeing 737 MAX 8.

Interpersonal Divide in the Age of the Machine (Oxford Univ. Press 2018) questions overuse of technology at the expense of human intelligence and common sense.

Those were key factors in the MAX 8 crashes.


New Zealand mosque attack shows need for Congress to regulate Facebook

Des Moines Register

Copyright 2019 Des Moines Register

Some 13 years ago, I alerted the higher education community about the misuse of a new social medium, noting that 20,247 of 25,741 students at Iowa State University were already registered, although many faculty and administrators had never heard about it.

The piece, “Facing the Facebook,” appeared in The Chronicle of Higher Education. Here’s an excerpt:

“On many levels, Facebook is fascinating — an interactive, image-laden directory featuring groups that share lifestyles or attitudes. Many students find it addictive, as evidenced by discussion groups with names like ‘Addicted to the Facebook,’ which boasts 330 members at Iowa State. Nationwide, Facebook tallies 250 million hits every day and ranks ninth in overall traffic on the Internet.”

In late 2005, when I researched Facebook for my Chronicle piece, the platform boasted 5.5 million users. In 2012, Facebook surpassed 1 billion users. It tallied 2.32 billion active users at the end of 2018. If you count the company’s WhatsApp, Instagram and Messenger, that figure rises to 2.7 billion.

In sum, the company’s total registered users are about the size of the populations of China and India combined.
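As a quick sanity check of that comparison, here is a back-of-envelope calculation; the China and India population figures are rough 2019 estimates supplied for illustration, not numbers from the piece, while the 2.7 billion figure comes from the text above:

```python
# Back-of-envelope check of the user-count vs. population comparison.
# Population figures below are rough 2019 estimates (assumptions),
# not numbers reported in the article.
facebook_family_users = 2.7e9   # Facebook + WhatsApp + Instagram + Messenger (from the text)
china_population = 1.39e9       # assumed estimate
india_population = 1.35e9       # assumed estimate

combined = china_population + india_population
ratio = facebook_family_users / combined

print(f"China + India: {combined / 1e9:.2f} billion")
print(f"Facebook-family users as a share of that: {ratio:.0%}")
```

The ratio lands near 100 percent, which is why “about the size of the populations of China and India combined” is a fair characterization.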

That’s a lot of power. That’s a lot of profit.

For the rest of the op-ed, visit: https://www.desmoinesregister.com/story/opinion/columnists/iowa-view/2019/03/19/new-zealand-mosque-attack-shows-need-congress-regulate-facebook/3205769002/

UPDATE: Wiretap v. Photoshop in college admissions scandal

Photoshopped stock images of athletes–with applicant faces alongside fake profiles–were used in the college admissions scandal to gain access to elite colleges. Parents paid millions to an organization whose digital methods were no match for modern-day wiretap technology. Some of those parents are headed to jail.

The cheating scandal, recounted in Inside Higher Ed, has led to 50 indictments involving non-athlete applicants, bribed coaches and rigged SAT/ACT scores used to ensure acceptance at elite and competitive colleges.

Among those indicted were actresses Felicity Huffman and Lori Loughlin, along with wealthy parents in law and business. They paid millions in a rigged system so that their children could take slots that other applicants deserved based on grades, test scores and/or athletic abilities.

Huffman was sent to prison for 14 days. She was incarcerated in a facility that Forbes has described as “Club Fed,” noting: “Its prisoners enjoy a mild climate, access to e-mail, a chance to reduce their sentence via on-site rehab and occupational programs like horticulture and cosmetology.”

Loughlin is fighting her charges and, if convicted, is expected to serve more jail time than Huffman.

Federal investigators used wiretaps to gather evidence against the accused and the scheme’s mastermind, Rick Singer, who ran Edge College & Career Network and a foundation created to conceal bribe money.

Not all cases involved non-athletes taking recruitment slots reserved for worthy applicants with athletic ability. However, Division I coaches from Georgetown, Stanford, Texas, UCLA, USC, Wake Forest and Yale were charged in the scheme. Use of recruiting slots reportedly was one additional method of ensuring acceptance.

The lesson here, however, concerns the sophistication of modern-day wiretap technology in federal investigations. Cornell Law School lists these surveillance methods:

Examples of electronic surveillance include: wiretapping, bugging, videotaping; geolocation tracking such as via RFID, GPS, or cell-site data; data mining, social media mapping, and the monitoring of data and traffic on the Internet. Such surveillance tracks communications that falls into two general categories: wire and electronic communications. “Wire” communications involve the transfer of the contents from one point to another via a wire, cable, or similar device. Electronic communications refer to the transfer of information, data, sounds, or other contents via electronic means, such as email, VoIP, or uploading to the cloud.

Technology used in the cheating scandal was easily detected. In some cases, ordinary computer programs were used to manipulate images and create fake digital content, as wiretaps later revealed.

Some universities have expelled students in the cheating scandal; others have not. Reports indicate that many such students did not realize what their parents had done to get them into top programs.

Reportedly, Lori Loughlin and her husband, Mossimo Giannulli, agreed to pay $500,000 in bribes to have their two daughters, Isabella, 20, and Olivia, 19, “designated as recruits to the USC crew team — despite the fact that they did not participate in crew — thereby facilitating their admission to USC.”

In one case, prior YouTube posts by Lori Loughlin’s daughter Olivia Jade Giannulli caused a stir on social media shortly after the scandal broke.


Latest reports indicate that Giannulli is not planning to return to USC and has taken a break from social media, apart from wishing her mother a happy birthday and a since-deleted Instagram post in which she gave the media the middle finger.


Interpersonal Divide in the Age of the Machine includes chapters on the use of technology to create fake and misleading content and the ramifications at home, school and work.

Here’s an excerpt:

Media and technology have always manipulated self-image, values, and perception. However, the current high-tech era is unique because of the power of the electronic tools, the time that we spend using them, the tasks that we relegate to machines out of convenience, and the influence of the corporations that manufacture them. The net result is a blurring of boundaries. The real and virtually real—including augmented reality, or computer enhanced views of life and locale (as in GPS technology)—have blended to such degree that we cannot always correctly ascertain what is genuine and enduring from what is artificial and fleeting. That type of confusion comes with its own set of interpersonal and societal consequences, complicating our lives and relationships, not because we are necessarily dysfunctional, but because we have forgotten how to respond ethically, emotionally, and intellectually to the challenges, desires, and opportunities of life at home, school and work.

The college admissions scandal is a prime case of privileged people failing to respond ethically, emotionally and intellectually. Now they face the consequences. As the U.S. Attorney in Boston noted, “There can be no separate college admission system for the wealthy, and I’ll add there will not be a separate criminal justice system, either.”

Live-link hospital robot delivers death prognosis

The family of Ernest Quintana was angered that a doctor used a robot’s live video screen to tell them he could do nothing else for the dying patient. His granddaughter says using the machine lacked compassion. That, and a whole lot more.

Annalisia Wilharm expected a doctor to enter her grandfather’s hospital room at Permanente Medical Center in Fremont, Calif. Instead, she told CNN, she saw a nurse wheel in a robot with a physician delivering the news via a video screen. She didn’t know the doctor or where he was when he recommended a morphine drip for 78-year-old Ernest Quintana, who died the next day.

Wilharm told CNN: “We knew that we were going to lose him. Our point is the delivery (of the news). There was no compassion.”

The hospital issued a statement noting the video technology allowed a live conversation to take place and that a nurse was in the room to explain how the machine functioned. The hospital reportedly does not encourage the use of technology for patient-doctor interactions and acknowledged the incident fell short of the family’s expectations.

Interpersonal Divide in the Age of the Machine (Oxford, 2018) prophesied the increasing use of robots in medicine, noting that they can assist physicians with procedures. However, using a live video link through a robot-like machine is neither compassionate nor practical when a terminal prognosis is delivered.

The theme of Interpersonal Divide is based in part on the philosophy of French-Maltese social critic Jacques Ellul: Technology changes everything it touches, without itself being changed much at all. Relying on a nurse to explain the technology makes that person an IT expert rather than a medical one.

Finally, it doesn’t matter that the doctor delivered the news via a live link because the medium in this case was the message. The robot-looking machine asserted its presence in McLuhan fashion.

That’s the lesson for Permanente Medical Center.

Zuckerberg Resurrects Value of Privacy: Silly Us, We Thought It Was Dead

In 1999, Sun Microsystems CEO Scott McNealy prophesied the future with this quote: “You have zero privacy anyway. Get over it.” Facebook CEO Mark Zuckerberg has tried to get under and around privacy, earning billions in the process. Now he wants to resurrect it, potentially threatening news media business models.

Mark Zuckerberg plans to integrate Facebook, Instagram, WhatsApp and Messenger so that users can text each other across those platforms, creating a “digital living room” whose chief attribute would be privacy.

In a lengthy blog post, Zuckerberg wrote:

As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

He laid out this vision:

  • Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.
  • Encryption. People’s private communications should be secure. End-to-end encryption prevents anyone — including us — from seeing what people share on our services.
  • Reducing Permanence. People should be comfortable being themselves, and should not have to worry about what they share coming back to hurt them later. So we won’t keep messages or stories around for longer than necessary to deliver the service or longer than people want them.
  • Safety. People should expect that we will do everything we can to keep them safe on our services within the limits of what’s possible in an encrypted service.
  • Interoperability. People should be able to use any of our apps to reach their friends, and they should be able to communicate across networks easily and securely.
  • Secure data storage. People should expect that we won’t store sensitive data in countries with weak records on human rights like privacy and freedom of expression in order to protect data from being improperly accessed.

The New York Times analyzed these functions, noting that they were proposed following years of privacy invasion and scandal.

Foreign agents from countries like Russia have used Facebook to publish disinformation, in an attempt to sway elections. Some communities have used Facebook Groups to strengthen ideologies around issues such as anti-vaccination. And firms have harvested the material that people openly shared for all manner of purposes, including targeting advertising and creating voter profiles.

The Columbia Journalism Review speculated on a motive for Zuckerberg resurrecting privacy as a core value, questioning whether “hateful or violent content will soon appear in private rather than public messages,” meaning the company would no longer be liable in any privacy-invasion litigation. “The latter question has already come up in India, where much of the violence driven by WhatsApp has been fueled by messages posted in private groups.”

The magazine also noted that these new steps to secure privacy for users might affect journalism, impacting the distribution of news and the data-mining through social media on which Facebook’s continuous surveillance-and-selling model rests. That threatens ad revenue, especially since media business models have been built around Facebook’s algorithms.

Interpersonal Divide has covered Facebook since the platform’s inception, in both its first and second editions, with particular attention to privacy and data mining. Here’s an excerpt:

As such, billions of users worldwide may be seen as exploited workers who spend hours each day allowing their personal information to be mined and sold and who provide content that engages others and generates more data for profit-minded creators and stockholders of Facebook, Twitter, LinkedIn, Instagram, and other popular venues.

The text also discusses how Facebook disseminated fake news associated with the 2016 presidential election.

The author’s latest work, Living Media Ethics (Routledge, 2019), blames Facebook for disseminating fake news as avidly as fact-based journalism, threatening democracy because fewer people can distinguish real from fabricated reports. Here’s an excerpt:

Social media, especially Facebook, has become the primary disseminator of false news reports, prompting the company and FactCheck.org to partner in an attempt to flag fabricated “news.” The initiative was triggered by false news during the 2016 presidential campaign.[1] FactCheck.org recommends that reporters and viewers consider the source of information, read content carefully before jumping to conclusions, and verify the reputation of the author or group disseminating stories.

FactCheck cites these warning signs:

  • Did a reader or viewer send you a tip and social media link based on a bias that you both may share or that your media outlet has supported in the past?
  • Is the headline or title of a report sensationalized with content about what might occur hypothetically if a sequence of events takes place?
  • Is the content of an alleged news report undated or based on events that might have happened in the past, falsely depicted as happening in the present?

[1] Sydney Schaedel, “How to Flag Fake News on Facebook,” FactCheck.org, July 6, 2017, http://www.factcheck.org/2017/07/flag-fake-news-facebook/