Author: Michael Bugeja

When “smart” becomes “snoop”: Your TV is watching you

New TV models recognize you and record what you do in your home, including bedrooms, feeding data to advertisers. Worse, hackers can access your devices to cyberstalk and blackmail you. They can even tip off burglars to the times you are apt to be away from home. The FBI just put out a warning.

In 2005, the first edition of Interpersonal Divide stated: “We get the feeling on the other side of our computer that no one is looking back at us through windows, and yet, everybody could be.”

The second edition took that a step further, warning about privacy invasion from voice-recognition speakers such as Amazon Echo (aka Alexa). Now, built into televisions alongside digital cameras, those speakers have eyes as well as ears.

And they’re stalking you.

So much so, in fact, that the Federal Bureau of Investigation has posted a warning about privacy invasion and cyberstalking. Here’s an excerpt:

Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router. Hackers can also take control of your unsecured TV. … In a worst-case scenario, they can turn on your bedroom TV’s camera and microphone and silently cyberstalk you.

Consider that last scenario. Bedrooms. Two-thirds of adults have televisions there, and worse, 71 percent of kids ages 8 to 18 do, too. Imagine what a bad actor could do by recording photos, video and audio of what occurs in the most intimate spaces of our homes.

The FBI recommends that you take these steps to protect yourself and family members:

  • Know exactly what features your TV has and how to control those features. Do a basic Internet search with your model number and the words “microphone,” “camera,” and “privacy.”
  • Don’t depend on the default security settings. Change passwords if you can – and know how to turn off the microphones, cameras, and collection of personal information if possible. If you can’t turn them off, consider whether you are willing to take the risk of buying that model or using that service.
  • If you can’t turn off a camera but want to, a simple piece of black tape over the camera eye is a back-to-basics option.
  • Check the manufacturer’s ability to update your device with security patches. Can they do this? Have they done it in the past?
  • Check the privacy policy for the TV manufacturer and the streaming services you use. Confirm what data they collect, how they store that data, and what they do with it.
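The first item on that list, knowing what your set actually exposes, can be checked from a laptop on the same home network. Here is a minimal sketch in Python, not an FBI tool; the IP address and port list are placeholder assumptions you would replace with your own TV’s address, usually shown in its network-settings menu. It simply reports which common web and remote-control ports the set answers on; an unexpected open port is a cue to consult the manual or the manufacturer’s privacy settings.

```python
# A minimal sketch, assuming your TV's address is visible in its
# network-settings menu. TV_IP and COMMON_PORTS are placeholders,
# not values from the FBI guidance.
import socket

TV_IP = "192.168.1.50"                       # placeholder: substitute your TV's address
COMMON_PORTS = [80, 443, 8008, 8080, 9080]   # typical web/remote-control ports

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in COMMON_PORTS:
        state = "OPEN" if is_open(TV_IP, port) else "closed"
        print(f"{TV_IP}:{port} {state}")
```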

If you are the victim of cyber fraud, you should contact the FBI’s Internet Crime Complaint Center at http://www.IC3.gov or call your local FBI office.

Interpersonal Divide in the Age of the Machine has chapters about privacy invasion at home, school and work, with sections on cyberstalking, harassment and bullying. In addition to televisions, Interpersonal Divide warns about other everyday spying appliances, including dishwashers, refrigerators and even coffee machines.

Social Media’s Power Will Influence Impeachment

As televised hearings begin on the impeachment of President Donald Trump, pundits note that the former reality television star knows the power of that medium. He also knows the power of Twitter. Presidents who faced impeachment in the past contended only with legacy media. In this case, not so much.

Richard Nixon had three networks; Bill Clinton, networks, cable and digital media in their infancy; but Donald Trump has networks, cable, Twitter, Facebook, WhatsApp, WeChat, WordPress, Tumblr, Instagram, Zoom, Skype, LinkedIn and 65+ top global platforms to contend with, all sharing, commenting on and, in many cases, lying about the news.

Short of going into hiding with no digital access, everyone will hear a version of the truth, and not necessarily the factual one, including Sen. Lindsey Graham, who says he won’t read transcripts or watch televised proceedings.

Too bad. He’ll get mostly tainted, if not fake, news.

Impeachment news and views will be everywhere, globally, 24/7. That means citizens who only watch Fox News or MSNBC will be informed by other sources via one platform or another. There is no escaping this.

But here’s the rub: Normally, social media outlets discuss a wide range of topics, such as the day’s news, celebrity gossip and sports. What happens when everyone is focused on the same topic: impeachment?

Now political affiliations come into play, and not in a factual way.

The one in five adults who follow Trump on Twitter will hear upsetting or affirming opinions from others migrating across platforms. Conservatives who support the president and rely on radio or cable news will be fending off liberal Democrats, who are most apt to use social media to influence and rally others. Very liberal Democrats and very conservative Republicans will continue to spread their views on Facebook, where content migrates to Instagram, so relatives and friends will be arguing through Thanksgiving and into the December and January holidays.

The edge in all these debates will go to viewers who watch the televised proceedings without media affiliation or mediation. Those who already made up their minds about whether the President should be impeached will be getting their information second-, third- and fourth-hand (or worse).

So, if you must rely on news sites, these are the most reliable, according to Forbes magazine:

1. The New York Times
2. The Wall Street Journal
3. The Washington Post
4. BBC
5. The Economist
6. The New Yorker
7. Wire Services: The Associated Press, Reuters, Bloomberg News
8. Foreign Affairs
9. The Atlantic
10. Politico

But you don’t need to subscribe or access their websites. In this momentous case, you can watch the proceedings and make up your own mind without all that social media noise.

In issues of this magnitude, it is important to get the facts and to act on them, not only at the dinner table but especially in the voting booth. Listen, analyze and decide without being tainted by the divisive views of social media and media sites that align with political parties.

For more on the power of social media, and how you can mitigate its effects, read Interpersonal Divide in the Age of the Machine.

Social Media Amplifies Stereotypes

The University of Missouri Athletic Department wanted to promote the NCAA’s diversity week but sparked dissent over how African Americans were depicted. Think before you tweet, or suffer a similar fate.

The intent of the tweet was proactive, celebrating diversity by promoting aspirations of athletes. It had the opposite effect.

The tweet’s photo included track athlete Arielle Mack, depicted with the slogan “I am an African American woman.” Ticket office employee Chad Jones-Hicks appeared above the statement, “I value equality.” The tagline for white gymnast Chelsey Christensen read “I am a future doctor”; the one for swimmer C.J. Kovac proclaimed, “I am a future corporate financer (sic).”

The misspelling of “financier” indicates a lack of fact-checking. Had someone analyzed each word of the post, perhaps the disparities could have been avoided. To be sure, Mack and Jones-Hicks have aspirations on par with those of Christensen and Kovac, but the emphasis for them was on race rather than career goals.

Anything on the internet can go viral, undermining intent and tainting an organization’s reputation. Clearly, Mizzou Athletics wanted to celebrate diversity and never meant the post to be demeaning.

According to the Washington Post, the tweet was based on a video containing this quote from Mack:  “I am an African American woman, a sister, a daughter, a volunteer and a future physical therapist.” The tagline, of course, should have been “future physical therapist.”

Perhaps one errant tagline could be forgiven; but in this case, there were three.

Sprinter Caulin Graves said, “I am a brother, uncle and best of all, I am a leader [emphasis added].” His tagline, too, drew on the opening words rather than the emphasized ones.

The Athletic Department apologized for the tweet with another tweet containing the video on which the errant post was based.

The video, a professional product, has much to commend it. However, the stereotypical tweet undermined that effort.

Vincent Filak, who covered the issue on the Dynamics of Writing website, had these recommendations:

  • Scrutinize each word of any post to guard against stereotypes.
  • Ask for a second opinion if you are unsure whether you are disparaging anyone.
  • Run the content by a source included in the content for his or her opinion.
  • Talk to an expert who may have insight or advice on inclusion.

Filak adds, “Even if your newsroom, your PR firm or your ad agency doesn’t have a cornucopia of diversity, you can still avoid dumb mistakes by asking for help.”

Take time with social media posts. Think critically or risk being the target of criticism.

Robotic Hiring Systems and Discrimination

Companies using machine hiring systems might screen out potential employees in violation of federal laws prohibiting bias based on race, disability, age and other factors. Human interviewers must honor protected classes, while AI vendors shield proprietary algorithms.

In a Wall Street Journal video, senior correspondent Jason Bellini covers the pros and cons of robotic hiring systems. He interviews Kevin Parker, CEO of HireVue, who says his platform is more objective than traditional interviews because it removes bias from the hiring process. However, Bellini also interviews Ifeoma Ajunwa, a legal scholar and labor law professor at Cornell University, who challenges that view.

First, some legal background:

Employers who interview job applicants must adhere to federal anti-discrimination law, including Title VII of the Civil Rights Act of 1964, which forbids discrimination based on race, color, religion, sex and national origin; companion statutes extend those protections to age and disability. Questions must be free of bias. For instance, an interviewer may not inquire about a candidate’s height, weight or marital status.

No doubt AI programmers have taken Title VII into account when phrasing interview questions, such as those found in this tip sheet by the University of New England. But that’s not where algorithmic discrimination might occur.

That bias might be subtle, programmed into an algorithm adapted to the hiring company’s idea of an “ideal” job candidate. People might be excluded without anyone knowing it if the system is measuring facial features, voice tone or other distinguishing human traits for cues to age, weight or symmetry. There is no real way of knowing without examining the proprietary program.

Dr. Ajunwa addresses this concern in an NPR interview:

So that’s where it gets more complicated – right? – because a job applicant could suspect that the reason they were refused a job was based on characteristics such as race or gender, and this is certainly prohibited by law. But the problem is how to prove this. So the law requires that you prove either intent to discriminate or you show a pattern of discrimination. Automated hiring platforms actually make it much harder to do either of those.

And a lot of times, the algorithms that are part of the hiring system, they are considered proprietary, meaning that they’re a trade secret. So you may not actually be able to be privy to exactly how the algorithms were programmed and also to exactly what attributes were considered. So that actually makes it quite difficult for a job applicant.
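To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python. The features, weights and cutoff are invented for illustration and come from no real vendor’s product. Age never appears as an input, yet a correlated proxy, speaking rate, decides who clears the bar.

```python
# Hypothetical illustration of proxy discrimination: the protected
# attribute (age) is never an input, yet a correlated feature does
# the excluding. All names and numbers are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    keyword_match: float  # 0-1, how well answers fit the job posting
    speaking_rate: float  # words per second; can correlate with age

def score(candidate: Candidate) -> float:
    # The model "rewards energy" via speaking rate -- a proxy that can
    # track age even though age itself is never measured.
    return 0.4 * candidate.keyword_match + 0.6 * (candidate.speaking_rate / 4.0)

candidates = [
    Candidate("A", keyword_match=0.9, speaking_rate=2.2),  # stronger answers, slower speech
    Candidate("B", keyword_match=0.7, speaking_rate=3.8),  # weaker answers, faster speech
]

CUTOFF = 0.75
for c in candidates:
    s = score(c)
    print(f"{c.name}: score={s:.2f} -> {'advance' if s >= CUTOFF else 'reject'}")
```

Candidate A gives the stronger answers but speaks more slowly and is rejected. Proving that pattern from the outside, without access to the weights, is exactly the difficulty Ajunwa describes.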

Benetech, a nonprofit whose mission is “to empower communities with software for social good,” is concerned about AI hiring systems discriminating against people with disabilities. The organization discusses key findings of a 2018 study titled “Expanding Employment Success for People with Disabilities”:

  • Artificial intelligence tools are increasingly widespread and vendors of these products have little understanding of their negative impact on the employment of people with disabilities.
  • The level of data collection about all of the relevant issues remains rudimentary, limiting many opportunities for improvements.
  • It is clear that employers see people with disabilities primarily through a compliance lens, and not through a business opportunity frame.

As AI hiring systems become more popular with such companies as Goldman Sachs, Unilever and Vodafone, attorneys and legislators are investigating ways to ensure algorithms are compliant with federal law.

Illinois is among the first in the nation to take on robotic hiring programs in its “Artificial Intelligence Video Interview Act,” which requires transparency and consent for any company using these algorithms.

In a post about the new law, Bloomberg Law states:

Employers increasingly are using AI-powered platforms such as Gecko, Mya, AutoView, and HireVue to streamline their recruitment processes, control costs, and recruit the best workers. Providers claim their technologies analyze facial expressions, gestures, and word choice to evaluate qualities such as honesty, reliability, and professionalism.

But the technology is also controversial. Privacy advocates contend AI interview systems may inject algorithmic bias into recruitment processes, and that AI systems could generate unfounded conclusions about applicants based on race, ethnicity, gender, disability, and other factors.

Interpersonal Divide in the Age of the Machine contains chapters that address the inherent biases of algorithmic programming. Institutional racism, subliminally associated with an organization’s target audience or bottom line, may be encoded into sophisticated robotic systems.

For instance, the Washington Post reports that a popular algorithm that identifies patients who need extra medical care “dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine.”

When it comes to robotic HR systems, that’s the beginning of what awaits those the algorithm selects for employment. If technology is used to select a person for a job, one can anticipate that it will be used to monitor performance on that job.

Here’s an excerpt from Interpersonal Divide:

Machines not only monitor how employees are using devices and applications but also may be programmed to detect moods and behaviors of those employees. Machines monitor employees to an alarming degree in some companies, often under the pretext of improving performance. Stress is measured, too, although usually in a negative light. Examples include tracking a worker’s Internet and social media use; tapping their phones, emails and texts; measuring keystroke speed and accuracy; deploying video surveillance; and embedding chips in company badges to evaluate whereabouts, posture and voice tone.

Cyberlaw needs to catch up with federal labor law, especially when AI is used in hiring and firing decisions. As Bloomberg Law notes in its report, some labor law attorneys believe algorithmic systems could unintentionally screen out protected classes. One attorney cited in the above post suggests employers should test robotic systems against a pool of candidates for potential bias.
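One established way to run such a test is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below, with invented group labels and counts, shows the arithmetic an employer could apply after running a pool of candidates through a robotic system and recording who advances.

```python
# Sketch of a disparate-impact screen using the EEOC's four-fifths
# rule of thumb. Group names and counts are invented for illustration.

pools = {
    # group: (selected, applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: selected / applicants
         for group, (selected, applicants) in pools.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"ratio to top group {ratio:.2f} -> {flag}")
```

Here group_b’s 30 percent rate against group_a’s 48 percent yields a ratio of about 0.63, well under the 0.8 threshold. The hard part, as Ajunwa notes, is getting that data out of a proprietary system in the first place.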

Fakes, Hacks, Hoaxes and Tall Tales: The State of U.S. Media in the Post-Truth Era

My paper, “Fakes, Hacks, Hoaxes and Tall Tales: The State of U.S. Media in the Post-Truth Era,” has been posted by the Commonwealth Centre for Connected Learning at the University of Malta. Thanks to Alex Grech for his leadership and to the internationally known speakers who presented at the Post Truth Conference Oct. 10-11 in Valletta.

You can download the paper at the link below. My abstract:

“Since the 2016 presidential election in the United States, politics and journalism have combined to undermine reality to such an extent that facts are alternative, and truth is not truth. All too often, social media are complicit in the obfuscation. This paper investigates that charge, exploring the role of 24/7 ubiquitous online access in creating a culture of lies, exposing inconvenient truths about American politics and news outlets in the post-truth era.”

https://connectedlearning.edu.mt/wp-content/uploads/2019/10/Michael-Bugeja.pdf

Can long-form journalism bring readers back by learning from the literary essay? 

In this abbreviated post, you can view how consumer technology has slowly eroded the audience for long-form or slow journalism. Below you’ll find a link to the Online Journalism Blog where we share 17 rhetorical concepts that can mitigate the smartphone effect.

In 2016 a Pew report looked at how readers interacted with over 74,000 articles on their mobile phones. It concluded that long-form reporting was holding its own despite the shift to mobile, boasting longer engaged time (123 seconds compared with 57.1 for short-form stories) and the same number of visits:

“While 123 seconds – or just over two minutes – may not seem long, and a far cry from the idealized vision of citizens settling in with the morning newspaper, two minutes is far longer than most local television news stories today.”

Long-form articles get twice the engaged time and about the same number of visitors on mobile

Tweaking the concept of long-form

But buried in the report were some problems: only 3 percent of long-form readers and 4 percent of short-form readers returned to the content once they left it, and both types of articles had brief lifespans after posting, with interaction after three days dropping by 89 percent for short-form and 83 percent for long-form.

Moreover, an “overwhelming majority of both long-form readers (72%) and short-form readers (79%) view just one article on a given site over the course of a month on their cellphone.”

Long-form content appeared to be performing better than short-form content on most measures — but it was a pretty low bar.

If the genre is to survive in the current digital environment, the prevailing concept of long-form journalism, it seems, still needs tweaking so that readers read more stories, return to them more frequently in order to finish them, and engage for even longer periods.

To view the 17 rhetorical terms, visit Online Journalism Blog: https://onlinejournalismblog.com/2019/10/05/longform-narrative-rhetorical-concepts/

Media Ethics: Behind The Carson King Saga

BEN KIEFFER, MATTHEW ALVAREZ, RICK BREWER, JULIA DIGIACOMO

 Interview begins at the 6:50 mark

Carson King, 24, has raised over $1 million for charity, and a wave of controversy, after going viral with a sign requesting beer money at the Cyclone-Hawkeye game on Sept. 14. Media ethicist and Iowa State University professor Michael Bugeja joins this ‘News Buzz’ edition of River to River to give his perspective on The Des Moines Register’s investigation of King’s past tweets and the ensuing backlash.