Pilot-less Passenger Jets in Your Future? Here are Vulnerabilities

Recently we learned about disturbing customer service incidents on U.S. passenger jets in which paying customers were dragged, bloodied, off aircraft or challenged to fights near the cockpit.

These incidents are outgrowths of putting profit above passengers. But a much more ominous development, which CNN states is a mere five years away, is the remote-controlled pilot-less passenger jet.

This raises serious questions covered in the forthcoming book, Interpersonal Divide in the Age of the Machine (July 2017 release date). Would you rather fly on a jet piloted by a machine whose algorithms are programmed for profit, or on one flown by a human whose adrenaline is programmed for survival?

Piloted flight is so safe that the odds of a crash are, literally, astronomical. As the Economist reports, if you took a trans-Atlantic flight from London to New York every day, you could expect to go down once every 14,716 years.

To be sure, pilot error is a chief cause of airline crashes, with some statistics reporting a figure as high as 58% over time. What those statistics do not show, however, are the occasions when pilot experience, intuition and critical thinking saved the day, as well as the passengers aboard.

Perhaps no person embodied that skill set more than Leonardo da Vinci, who conceived the design for a flying machine in the 15th century. He is the iconic father of all pilots. It was da Vinci who said,

For once you have tasted flight you will walk the earth with your eyes turned skywards, for there you have been and there you will long to return.

Here is an article about the 10 most heroic pilot rescues in aviation history, based on those non-algorithmic human skill sets of experience, intuition and critical thinking.

Few people ever attach the word “heroic” to a machine, unless, of course, you mean the comic art drawing application, HeroMachine3.

Currently some newer passenger jets are computerized to such an extent that on rare occasions a pilot has to rescue the aircraft from machines that go “psycho” during flight. Last week the Sydney Morning Herald published such a story, titled “The untold story of QF72: What happens when ‘psycho’ automation leaves pilots powerless?”

The video accompanying the story shows how a machine believed the plane to be hurtling at a nearly vertical angle and at an impossible speed (flight incapable of happening, by the way), a scenario that had never been programmed into its algorithms.
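To make that failure mode concrete, here is a minimal sketch, in Python, of the kind of plausibility check a human pilot applies instinctively when an instrument reading defies physics. Every name and threshold below is hypothetical, invented for illustration; it is not actual avionics code.

```python
# Hypothetical sketch: sanity-checking a stream of angle-of-attack (AoA)
# readings before letting them drive flight-control decisions.
# All thresholds and names are illustrative, not real avionics logic.

PHYSICAL_AOA_LIMIT_DEG = 30.0   # beyond this, a transport jet is not flying
MAX_PLAUSIBLE_CHANGE_DEG = 5.0  # per sample; a larger jump suggests a bad sensor

def plausible_aoa(previous_deg: float, current_deg: float) -> bool:
    """Return True if the new reading is physically believable."""
    if abs(current_deg) > PHYSICAL_AOA_LIMIT_DEG:
        return False  # the aircraft cannot actually fly at this attitude
    if abs(current_deg - previous_deg) > MAX_PLAUSIBLE_CHANGE_DEG:
        return False  # a spike this sharp is more likely sensor noise
    return True

# A spike like the one that fooled the flight computer:
readings = [2.1, 2.3, 50.6, 2.2]  # degrees; 50.6 is the impossible value
for prev, cur in zip(readings, readings[1:]):
    verdict = "accept" if plausible_aoa(prev, cur) else "reject (hold last good value)"
    print(f"{prev:5.1f} -> {cur:5.1f}: {verdict}")
```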

Yes, this is an exceptionally rare case, but it makes a point about machines. When they go haywire, humans are needed to set the programming aright again.

Wary yet about traveling as a passenger in a pilot-less jet? You should be.

In the end, profit-minded corporations will make the determination about aircraft with or without pilots. They most certainly will retain armed undercover air marshals to guard against terrorism.

But will they factor into the cost analysis the terrorist hacker who can infiltrate the cabin’s computer system with a virus that sends the airplane into a tailspin?

If we know anything in the age of the machine, it is this: any computer anywhere can be hacked. Let’s just hope that this doesn’t occur in the friendly skies.

New App Tracks, “Flips Off” Phones in Class

Earlier today I received this email at my workplace, the Greenlee School of Journalism and Communication at Iowa State University:

Are cell phones a distraction in your class?

If you’re like most educators today, you’ve probably noticed that attention is in short supply. Your lectures are frequently challenged by cell phone distractions and multitasking students, and you’re facing decreased participation, collaboration, and thoughtful discussion as a result. …  That’s why we created Flipd. Used by thousands of educators and students across North America, Flipd is a simple low-tech solution to a major high-tech problem.

Here’s how the app works: Teachers register their classes with the company, which sends a message to students to flip off their phones during lecture. (They can use them in an emergency, of course.) The application sends data to the teacher about any student who violates the rule and uses the phone. If a student used the phone for 15 minutes during class, that person’s data would appear on the teacher’s dashboard, as in the sketch below.
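Flipd has not published its internals, so the following is a rough illustration only: a minimal Python sketch of how that kind of violation tracking might work. The class, field names and 15-minute threshold are invented for the example, not Flipd’s actual code.

```python
# Hypothetical sketch of Flipd-style tracking: log phone-use intervals
# during a class session and surface violators on a teacher dashboard.
# Names, fields and the 15-minute threshold are illustrative only.

from dataclasses import dataclass, field

REPORT_THRESHOLD_MIN = 15.0  # minutes of use that flags a student

@dataclass
class ClassSession:
    course: str
    usage_minutes: dict = field(default_factory=dict)  # student -> minutes

    def log_use(self, student: str, minutes: float) -> None:
        """Record that a student used the phone during lecture."""
        self.usage_minutes[student] = self.usage_minutes.get(student, 0.0) + minutes

    def dashboard(self) -> list:
        """Students whose total use meets the reporting threshold."""
        flagged = [(s, m) for s, m in self.usage_minutes.items()
                   if m >= REPORT_THRESHOLD_MIN]
        return sorted(flagged, key=lambda pair: pair[1], reverse=True)

session = ClassSession("Media Ethics 101")
session.log_use("student_a", 4.0)
session.log_use("student_b", 9.0)
session.log_use("student_b", 6.5)   # total 15.5 -> appears on the dashboard
print(session.dashboard())          # [('student_b', 15.5)]
```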

You can read all about the application by clicking here.

The application is meant to discourage use of smartphones in class and mitigate that distraction.

I appreciate what this application is trying to accomplish. I have been writing about digital classroom distractions for many years. Here’s an essay titled “Distractions in the Wireless Classroom,” published in 2007 in the Chronicle of Higher Education. I prophesied, “As more and more classrooms go wireless, technology warnings on syllabi soon will be as standard as the ones about cheating (which laptops also facilitate).”

Well, that certainly came true.

Distraction is so bad in some classes that professors make their students sign a legal document promising not to use smartphones during lecture. Maybe they need Flipd.

In my media ethics class at ISU’s Greenlee School, I don’t restrict cell phone use but reserve several rows of seats in the back of the class so that students can text to their hearts’ content. No, I haven’t given up. Some students are addicted to their phones, and they need to learn a lesson, not about ethics but about technology.

Typically, students in those texting back rows do poorly on exams. After the midterm, they usually complain about grades. I use those occasions as a “teaching moment” to explain the high cost of distraction (and of tuition). Most then stop using their phones on their own.

The logic is simple. At the workplace, bosses won’t be using Flipd or signing contracts demanding that employees not use smartphones during business hours. The best way to eliminate distractions is to understand their consequences.

That’s what Interpersonal Divide in the Age of the Machine does in Chapter Five about use of technology at school.

Algorithms, Evil & Augmented Reality: The Desensitization of Facebook Users

For more than two hours, thousands of people viewed a video uploaded to Facebook by murderer Steve Stephens, who shot 74-year-old grandfather Robert Godwin, Sr., in Cleveland, a horrific killing that occurred during the same week that Facebook CEO Mark Zuckerberg gave a keynote address at a developers’ conference in San Jose, Calif.

Two scenes in two very different cities symbolizing the age of machines.

In one of the best perspectives on the Facebook tragedy, Erika D. Smith, associate editor at the Sacramento Bee, wrote:

Police initially thought that Stephens, who killed himself in Pennsylvania on Tuesday, had broadcast the shooting on Facebook Live, the service that lets users share their surroundings in real time. It turns out he didn’t; he recorded it on his phone and uploaded it. That’s horrific enough. But the day is almost certainly coming when someone really will commit murder live on Facebook, a social network with 1.86 billion active users. When that happens, I’m not sure the Silicon Valley giant or its peers will be ready for it.

Smith, whose hometown is Cleveland, says those “enterprising geeks in the Golden State” have failed to account for “the dark parts of human nature,” rolling out apps and online services in the mistaken belief that these will create a Utopian society. People like Stephens, and so many others in our increasingly desensitized culture, are bent on, or have accepted as reality, a Dystopian world where evil or deception reigns.

At his developers’ conclave, Zuckerberg briefly addressed the Cleveland slaying: “We have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening.” He expressed condolences to the Godwin family, then switched topics to future innovation, including a riff about “augmented reality.”

We are living in that augmented reality, as far as the human condition is concerned. There is goodness and evil in the world, and algorithms cannot discern the difference, especially since social media such as Facebook, YouTube and other platforms promote every conceivable act of violence (admittedly, often in animated video game formats, but those formats and applications were the precursors of what’s coming next). This is OUR reality: Machines can correlate what, when, where and how violence may be occurring online, but must then distinguish actual from augmented unconscionable acts in a digital maze comprising all manner of real and virtual crimes against humanity. All this violence is beyond algorithmic grasp, as the sketch below illustrates.
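To see why correlation is not discernment, consider this invented sketch of a naive, metadata-based flagger. It can rank content for human review from shallow signals, but it scores a real crime and a video-game clip identically, because nothing in those signals encodes which one is real.

```python
# Illustrative sketch only: a naive metadata-based violence flagger.
# It correlates the what/when/where signals but cannot tell actual
# violence from an animated or game rendering of the same act.

VIOLENCE_KEYWORDS = {"shooting", "assault", "attack"}

def flag_score(title: str, user_reports: int, is_live: bool) -> float:
    """Score content for human review from shallow signals."""
    score = float(sum(word in title.lower() for word in VIOLENCE_KEYWORDS))
    score += min(user_reports, 10) * 0.5  # weight user reports, capped
    score += 1.0 if is_live else 0.0      # prioritize live feeds
    return score

# A real crime and a video-game clip produce identical scores:
print(flag_score("street shooting footage", user_reports=2, is_live=True))   # 3.0
print(flag_score("street shooting gameplay", user_reports=2, is_live=True))  # 3.0
```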

Ponder that for a moment. We are drowning in video, digital and animated violence, often graphic in nature, and many of us have become desensitized to it.

There are many social media examples. An especially horrifying one occurred in Chicago, where 36 people watched a Facebook live feed of a sexual assault on a 15-year-old girl. This report of the incident correctly cites the desensitizing impact of digital violence on viewers to explain why no one watching the sex crime notified Facebook.

And that is the moral of the Stephens shooting this week. The multitudes who viewed the killing shared it or felt compelled to comment and add an emoticon. But too few were sickened enough to notify Facebook immediately.

In the end, Zuckerberg’s machines have no answer to the conundrum. One reason is technical; another is cultural. Technically, the sheer volume of Facebook content (more than 300 million uploaded photos per day, or roughly 3,500 every second) surpasses the ability of algorithms to identify evil-doers like Stephens. If big data could do this, there would not be 83 million fake profiles on Facebook. Culturally, Facebook embraces its brand, an insistent one, of new apps and features that data-mine users’ every consumer whim and continually encourage users to share, like, add content and increase traffic.

As a result, Facebook has to rely on desensitized users to keep the platform safe by reporting evil acts that algorithms cannot detect (textbook Utopian Silicon Valley vision). At the same time, the platform touts an augmented reality that offers new vistas to better humankind and to add more volume and traffic.

Facebook is a tale of two faces.

Technology is great, but are we prepared for the consequences?

 Michael Bugeja

Michael Bugeja, professor and director of ISU’s Greenlee School of Journalism and Communication, explores what might happen if we allow machines to dictate our lives. He says we need to educate ourselves on media literacy and the way in which we use technology — asserting ourselves over the technology. Story by Angie Hunt, photo by Christopher Gannon, Iowa State University News Service.

AMES, Iowa – Most Americans have some form of digital technology, whether it is a smartphone, tablet or laptop, within their reach 24-7.

Our dependence on these gadgets has dramatically changed how we communicate and interact, and is slowly eroding some of our core principles, said Michael Bugeja, professor and director of the Greenlee School of Journalism and Communication at Iowa State University. Bugeja is not advocating against technology – in fact, he relies on it for his work and personal life – but he says we need to recognize the possible ramifications before it is too late.

In his forthcoming book, “Interpersonal Divide in the Age of the Machine,” Bugeja explores what might happen if we allow machines to dictate our lives. Those machines range from smartphones to robotics to virtual reality. Bugeja theorizes that because of our reliance on machines, we will start to adopt the universal principles of technology, such as urgency, a need for constant updates and a loss of privacy.

“We are losing empathy, compassion, truth-telling, fairness and responsibility and replacing them with all these machine values,” Bugeja said. “If we embed ourselves in technology, what happens to those universal principles that have stopped wars and elevated human consciousness and conscience above more primitive times in history?”

Need for media and technology literacy

Bugeja warns of the dangers associated with adopting these values. The proliferation of fake news is just one example of how this shift is already influencing our culture. Technology provides a continuous connection to our social media feeds, which have become a popular source of news for many Americans. However, social media tends to cultivate news stories that reflect our individual beliefs and values, not a broad spectrum of viewpoints, and is an easy way for fake news stories to spread, Bugeja said.

“The business of journalism is already feeling the effect of living in a world of correlation without causation,” he said. “We understand what happened and how it happened, but we don’t understand why it happened.”

That’s why Bugeja wants colleges and universities to require students to take media and technology literacy courses. He says it is important that students know where to go to find credible news stories, and that they open their minds to information from a variety of sources, not just those that confirm what they already think or believe.

“We need these courses so that people know where to go for facts and how to deal with technology. If you do not assert yourself over technology, it will assert itself over you and you will be doing what the machine asks you, rather than you telling the machine what to do,” Bugeja said.

There is no easy short-term fix, Bugeja said, which is why we need to temper our use. He says the long-term solution comes through education.

Machines are not human

It is not just the philosophical and intellectual consequences that have Bugeja concerned, but also the impact of technology on business, behavior and everyday activities. Business and industry increasingly rely on machines or robots to do the jobs of humans. Bugeja says this shift can improve efficiency, safety and the company’s bottom line, but he questions what will happen to those individuals who lose their jobs to machines.

Working at a university, Bugeja has witnessed how machines have altered behavior in the classroom, in the dining hall and on walks across campus. Technology is a distraction that keeps students from focusing on their studies and limits interpersonal interactions, he said. In much the same way, the temptation to respond to a social media alert or text message notification while driving has increased safety concerns.

“We introduce new gadgets by saying they will make our lives better, which is true, but there are also dangers,” Bugeja said.

The purpose of his latest book is to raise awareness about the dangers of living in a world dominated by machines. He challenges readers, just as he does with students in his class, to balance their use of technology and not feel pressured to respond immediately to an email or text message. The book, published by Oxford University Press, will be available in July.

What Happens When a Machine Boots a Passenger

While millions of people viewed this disturbing scene of a passenger being forcibly removed from a United Airlines flight, few realized that a machine was at the bottom of it.

According to NBC News, “The airline said it had asked people to give up their seats for four crew members who needed to fly. When not enough people volunteered, despite being offered compensation, the airline used an algorithm to select people who then had to give up their space.”

How was that algorithm programmed?

Bloomberg News reports that computers make decisions “based on a fare class, an itinerary, status in its frequent flyer program, ‘and the time in which the passenger presents him/herself for check-in without advanced seat assignment.’”
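United has not published its selection logic, so what follows is a minimal hypothetical sketch built only from the factors Bloomberg lists; every field name, weight and ordering choice is invented for illustration.

```python
# Hypothetical sketch of an involuntary-bumping selector using only the
# factors Bloomberg lists: fare class, frequent-flyer status, and late
# check-in without an advance seat assignment. All details are invented.

from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_class_rank: int   # 0 = cheapest ticket ... higher = pricier
    elite_tier: int        # 0 = no frequent-flyer status ... higher = more
    checkin_order: int     # 0 = checked in first; larger = later, no seat assigned

def bump_priority(p: Passenger) -> tuple:
    """Lower tuples are bumped first: cheap fare, no status, late check-in."""
    return (p.fare_class_rank, p.elite_tier, -p.checkin_order)

passengers = [
    Passenger("A", fare_class_rank=0, elite_tier=0, checkin_order=41),
    Passenger("B", fare_class_rank=2, elite_tier=1, checkin_order=5),
    Passenger("C", fare_class_rank=0, elite_tier=3, checkin_order=12),
]

seats_needed = 1
to_bump = sorted(passengers, key=bump_priority)[:seats_needed]
print([p.name for p in to_bump])  # ['A']: cheapest fare, no status, latest check-in
```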

The New York Times focused on how machines have not improved airline travel, stating:

Everything about United Flight 3411 — overselling, underpaying for seats when they are oversold, a cultish refusal to offer immediate contrition, an overall attitude that brutish capitalism is the best that nonelite customers can expect from this fallen world — is baked into the airline industry’s business model. And that business model has been accelerated by tech.

The computer also didn’t take into account that the selected passenger was of Asian heritage, creating a crisis-management problem for United in its efforts to make inroads in China. The BBC reported as much, quoting one user who alleged the man’s treatment was “racial profiling.”

That may not be the case. What is certain, however, is that algorithms are making the decisions, not only to remove passengers but to overbook flights.

TROLL PATROL: AI and FB Safe Zones

The Pew Research Center has issued a new report that, in part, discusses the use of artificial intelligence to help create safe zones for social media users. Titled “The Future of Free Speech, Trolls, Anonymity and Fake News Online,” the center states:

Many experts fear uncivil and manipulative behaviors on the internet will persist – and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated ‘safe spaces’ separated from free-for-all zones. Some worry this will hurt the open exchange of ideas and compromise privacy.

The Center, working with Elon University, also conducted a massive survey of technology, corporate and government experts to assess the manipulation of public opinion caused by hacking social media. They were asked to respond to this statement: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”

The report notes that anonymity, a key component of the early internet, “is an element that many in this canvassing attributed to enabling bad behavior and facilitating ‘uncivil discourse’ in shared online spaces.”

This has especially impacted journalism.

Michael Bugeja, author of Interpersonal Divide, was one of the first to call attention to this issue. In this ABC News report, he is quoted as stating, “If you want enlightened conversations on your site, people have to use their real names,” because anonymity is troll camouflage.

However, Bugeja is against the use of artificial intelligence to create “safe zones” on social media as that may curtail speech and First Amendment protections. “If we put that power in the digital hands of artificial intelligence,” he adds, “we will conform to what programmers believe is ‘safe,’ which opens up all manner of constitutional and philosophical questions.”

The answer is not a Troll Patrol but tempered use of social media (short term) combined with education on First Amendment freedoms and media and technology literacy (long term).

Your Facsimile World

Image courtesy of Wikiart, copyright Enrico Donati, sculpture “Evil Eye” 1946

There’s nothing wrong with experiencing facsimile. But facsimile is not a substitute for experience.

The forthcoming edition of Interpersonal Divide in the Age of the Machine (Oxford Univ. Press, 2017) prophesies a “World Without Why,” but that is not the end of it: We are entering a “facsimile world” with the introduction of smartphone virtual reality.

In introducing a smartphone with a built-in virtual reality camera, Tech Worm  states:

You’ll probably never go to Mars, swim with dolphins, run an Olympic 100 meters, or sing onstage with the Rolling Stones. But if you own a Virtual Reality headset, you can do all the above things without leaving your sofa.

As the author of the first edition of Interpersonal Divide (Oxford Univ. Press, 2005), I have been tracking the facsimile world for more than a decade. For instance, I wrote several pieces about avatars in Second Life, one of the first virtual reality worlds on the web. Many colleges were conducting classes on the platform, and that worried me. My focus was not on new experiences, such as the vicarious feeling of flying from one location to the next, but on the deviant behaviors that students could encounter on landing at an unknown site and meeting strangers there.

In this piece, published in The Chronicle of Higher Education, I wrote:

We have enough trouble dealing with violence, assault, and sexual harassment in the real world, but few of us — even campus lawyers — know how the law applies in virtual realms vended by companies whose service terms often conflict with due process in academe.

In a follow-up article, again in the Chronicle, I recommended that all virtual reality games create terms of service to mitigate the incidence of avatar harassment, assault, racism, homophobia and other inappropriate content.

In another article titled “Avatar Rape,” published in Inside Higher Ed, I argued that avatar harassment and sexual assault remain controversial issues because educational institutions hosting virtual worlds are not accustomed to dealing with — or even discussing — digital forms of these distressing behaviors.

Now, with a VR headset and a smartphone, the future of graphic encounters, including all forms of illicit behaviors, will change, along with our psyches.

Users will move from vicarious characters manipulated by keyboard and mouse to facsimile ones that have the feel, if not the substance, of real life.

Case in point: Tech Radar reports that the free site “Pornhub” is creating a new category for every conceivable form of sexual behavior. According to the post, “Pornhub has such faith in VR that in addition to launching the new category, the site is also giving away 10,000 headsets to get early adopters on board.”

The addictive quality of smartphone VR has yet to be measured. But society may be moving from digital marijuana to heroin in record time.

My concerns transcend sex and violence. With a VR headset, you can dream of all the things you had hoped to experience on your bucket list. Swim with dolphins. Climb a mountain. Visit the Sistine Chapel, Mount Olympus, Yellowstone National Park … and never actually do anything. And these are only tourist-type facsimiles. You can imagine the range of personal experience (the good, the bad and the ugly) in which consumers are going to indulge.

This is not to say that facsimile cannot enlighten us. A colleague of mine who teaches virtual reality at Iowa State University notes that a person might take up the cause of a social issue, such as civil rights, by experiencing what it is like to be part of a protest. All that is true, of course.

But the reality, or virtual reality I should say, can just as easily seduce us away from the difficult work of actual achievement and participation. The machine is the grand enabler. Gratification theory will have to be rewritten.

Finally, none of this is new. Technology has always provided facsimile. Consider this analogy from the 1970s, when photocopying machines replaced mimeographs. (Here’s a link for those who have never heard of mimeographs.) Teachers, in particular, felt that they had read an article simply because they had photocopied it.

Now, with VR technology, people will feel that they have lived a life simply because they donned a headset.