New Zealand mosque attack shows need for Congress to regulate Facebook

Des Moines Register

Copyright 2019 Des Moines Register

Some 13 years ago, I alerted the higher education community about the misuse of a new social medium, noting that 20,247 of 25,741 students at Iowa State University were already registered, although many faculty and administrators had never heard about it.

The piece, “Facing the Facebook,” appeared in The Chronicle of Higher Education. Here’s an excerpt:

“On many levels, Facebook is fascinating — an interactive, image-laden directory featuring groups that share lifestyles or attitudes. Many students find it addictive, as evidenced by discussion groups with names like ‘Addicted to the Facebook,’ which boasts 330 members at Iowa State. Nationwide, Facebook tallies 250 million hits every day and ranks ninth in overall traffic on the Internet.”

In late 2005, when I researched Facebook for my Chronicle piece, the platform boasted 5.5 million users. In 2012, Facebook surpassed 1 billion users. It tallied 2.32 billion active users at the end of 2018. If you count the company’s WhatsApp, Instagram and Messenger, that figure rises to 2.7 billion.

In sum, the company’s total registered users are about the size of the populations of China and India combined.

That’s a lot of power. That’s a lot of profit.

For the rest of the op-ed, click here or visit: https://www.desmoinesregister.com/story/opinion/columnists/iowa-view/2019/03/19/new-zealand-mosque-attack-shows-need-congress-regulate-facebook/3205769002/

Wiretap v. Photoshop in college admissions scandal

Photoshopped stock images of athletes, manipulated to include applicants' faces alongside fake profiles, were used in a cheating scandal to secure admission to elite colleges. Parents paid millions to an organization whose digital methods were no match for modern-day wiretap technology.

An account of the cheating scandal, published in Inside Higher Ed, details 50 indictments involving non-athlete applicants, bribed coaches and rigged SAT/ACT scores used to secure acceptance at elite and competitive colleges.

Among those indicted are actresses Felicity Huffman and Lori Loughlin, along with wealthy parents in law and business. They paid millions in a rigged system so that their children could take slots that other applicants deserved based on grades, test scores and/or athletic abilities.

Federal investigators used wiretaps to gather evidence against the accused and the scheme’s mastermind, Rick Singer, who ran Edge College & Career Network and a foundation created to conceal bribe money.

Not all cases involved non-athletes taking recruitment slots reserved for worthy applicants with athletic ability. However, Division I coaches from Georgetown, Stanford, Texas, UCLA, USC, Wake Forest and Yale were charged in the scheme. Use of recruiting slots reportedly was one additional method of ensuring acceptance.

The lesson here, however, concerns the sophisticated technology of modern-day wiretaps in federal investigations. Cornell Law School lists these methods of electronic surveillance:

Examples of electronic surveillance include: wiretapping, bugging, videotaping; geolocation tracking such as via RFID, GPS, or cell-site data; data mining, social media mapping, and the monitoring of data and traffic on the Internet. Such surveillance tracks communications that fall into two general categories: wire and electronic communications. “Wire” communications involve the transfer of the contents from one point to another via a wire, cable, or similar device. Electronic communications refer to the transfer of information, data, sounds, or other contents via electronic means, such as email, VoIP, or uploading to the cloud.

Technology used in the cheating scandal was easily detected. In some cases, ordinary computer programs were used to manipulate images and create fake digital content, as the wiretap evidence revealed.

At this point, universities have not indicated how they will deal with students admitted fraudulently. Reports indicate that many such students did not realize what their parents had done to get them into top programs.

In one case, prior YouTube posts by Lori Loughlin’s daughter Olivia caused a stir on social media.

Reportedly, Lori Loughlin and her husband, Mossimo Giannulli, agreed to pay $500,000 in bribes to have their two daughters, Isabella, 20, and Olivia, 19, “designated as recruits to the USC crew team — despite the fact that they did not participate in crew — thereby facilitating their admission to USC.”

Interpersonal Divide in the Age of the Machine includes chapters on use of technology to create fake and misleading content and the ramifications of that at home, school and work.

Here’s an excerpt:

Media and technology have always manipulated self-image, values, and perception. However, the current high-tech era is unique because of the power of the electronic tools, the time that we spend using them, the tasks that we relegate to machines out of convenience, and the influence of the corporations that manufacture them. The net result is a blurring of boundaries. The real and virtually real—including augmented reality, or computer enhanced views of life and locale (as in GPS technology)—have blended to such degree that we cannot always correctly ascertain what is genuine and enduring from what is artificial and fleeting. That type of confusion comes with its own set of interpersonal and societal consequences, complicating our lives and relationships, not because we are necessarily dysfunctional, but because we have forgotten how to respond ethically, emotionally, and intellectually to the challenges, desires, and opportunities of life at home, school and work.

The college admissions scandal is a prime case of privileged people failing to respond ethically, emotionally and intellectually. Now they face the consequences. As the US attorney in Boston noted, “There can be no separate college admission system for the wealthy, and I’ll add there will not be a separate criminal justice system, either.”

Live-link hospital robot delivers death prognosis

The family of Ernest Quintana was angered that a doctor used a robot’s live video screen to tell them he could do nothing else for the dying patient. His granddaughter says using the machine lacked compassion. That, and a whole lot more.

Annalisia Wilharm expected a doctor to enter her grandfather’s hospital room at Permanente Medical Center in Fremont, Calif. Instead, she told CNN, she saw a nurse wheel in a robot with a physician delivering the news via a video screen. She didn’t know the doctor or where he was when he recommended a morphine drip for 78-year-old Ernest Quintana, who died the next day.

Wilharm told CNN: “We knew that we were going to lose him. Our point is the delivery (of the news). There was no compassion.”

The hospital issued a statement noting the video technology allowed a live conversation to take place and that a nurse was in the room to explain how the machine functioned. The hospital reportedly does not encourage the use of technology for patient-doctor interactions and acknowledged the incident fell short of the family’s expectations.

Interpersonal Divide in the Age of the Machine (Oxford, 2018) prophesied the increasing use of robots in medicine, noting that they can assist physicians with procedures. However, a live video link through a robot-like machine is neither compassionate nor practical for delivering a terminal prognosis.

The theme of Interpersonal Divide is based in part on the philosophy of French-Maltese social critic Jacques Ellul: Technology changes everything it touches without itself being changed much at all. Relying on a nurse to explain the technology makes that person an IT expert rather than a medical one.

Finally, it doesn’t matter that the doctor delivered the news via a live link, because the medium in this case was the message. The robot-looking machine asserted its presence in McLuhanesque fashion.

That’s the lesson for Permanente Medical Center.

Zuckerberg Resurrects Value of Privacy: Silly Us, We Thought It Was Dead

In 1999, Sun Microsystems CEO Scott McNealy prophesied the future with this quote: “You have zero privacy anyway. Get over it.” Facebook CEO Mark Zuckerberg has tried to get under and around privacy, earning billions in the process. Now he wants to resurrect it, potentially threatening news media business models.

Mark Zuckerberg plans to integrate Facebook, Instagram, WhatsApp and Messenger so that users can text each other across those platforms, creating a “digital living room” whose chief attribute would be privacy.

In a lengthy blog post, Zuckerberg wrote:

As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

He laid out this vision:

  • Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.
  • Encryption. People’s private communications should be secure. End-to-end encryption prevents anyone — including us — from seeing what people share on our services.
  • Reducing Permanence. People should be comfortable being themselves, and should not have to worry about what they share coming back to hurt them later. So we won’t keep messages or stories around for longer than necessary to deliver the service or longer than people want them.
  • Safety. People should expect that we will do everything we can to keep them safe on our services within the limits of what’s possible in an encrypted service.
  • Interoperability. People should be able to use any of our apps to reach their friends, and they should be able to communicate across networks easily and securely.
  • Secure data storage. People should expect that we won’t store sensitive data in countries with weak records on human rights like privacy and freedom of expression in order to protect data from being improperly accessed.

The New York Times analyzed these functions, noting that they were proposed following years of privacy invasion and scandal.

Foreign agents from countries like Russia have used Facebook to publish disinformation, in an attempt to sway elections. Some communities have used Facebook Groups to strengthen ideologies around issues such as anti-vaccination. And firms have harvested the material that people openly shared for all manner of purposes, including targeting advertising and creating voter profiles.

The Columbia Journalism Review speculated on a motive for Zuckerberg resurrecting privacy as a core value, questioning whether “hateful or violent content will soon appear in private rather than public messages,” meaning the company would no longer be liable in privacy-invasion litigation. “The latter question has already come up in India, where much of the violence driven by WhatsApp has been fueled by messages posted in private groups.”

The magazine also noted that these new steps to secure privacy for users might affect journalism, altering the distribution of news and the data-mining through social media that Facebook continuously surveils and sells. That threatens ad revenue, especially since media business models have been built around Facebook’s algorithms.

Interpersonal Divide has covered Facebook since its inception in the first and second editions, with particular attention to privacy and datamining. Here’s an excerpt:

As such, billions of users worldwide may be seen as exploited workers who spend hours each day allowing their personal information to be mined and sold and who provide content that engages others and generates more data for profit-minded creators and stockholders of Facebook, Twitter, LinkedIn, Instagram, and other popular venues.

The text also discusses how Facebook disseminated fake news associated with the 2016 presidential election.

The author’s latest work, Living Media Ethics (Routledge, 2019), blames Facebook for disseminating fake news as avidly as fact-based journalism, threatening democracy because fewer people can decipher real from fabricated reports. Here’s an excerpt:

Social media, especially Facebook, has become the primary disseminator of false news reports, prompting the company and FactCheck.org to partner in an attempt to flag fabricated “news.” The initiative was triggered by false news during the 2016 presidential campaign.[1] FactCheck.org recommends that reporters and viewers consider the source of information, read content carefully before jumping to conclusions, and verify the reputation of the author or group disseminating stories.

FactCheck cites these warning signs:

  • Did a reader or viewer send you a tip and social media link based on a bias that you both may share or that your media outlet has supported in the past?
  • Is the headline or title of a report sensationalized with content about what might occur hypothetically if a sequence of events takes place?
  • Is the content of an alleged news report undated or based on events that might have happened in the past, falsely depicted as happening in the present?

[1] Sydney Schaedel, “How to Flag Fake News on Facebook,” FactCheck.org, July 6, 2017, http://www.factcheck.org/2017/07/flag-fake-news-facebook/
 

AI is efficient; language, not so much: Why we worry about robots

Facebook chatbots were asked to trade online as people do, but the test was shut down after the bots created and chatted in their own language. The linguistic experiment failed because English is inefficient. That’s how AI disasters may occur.

In 2017, Facebook programmed chatbots to make a trade the way that people do online, assigning values to hats, balls and books; but the engineers failed to set one programming requirement: use everyday English.

The English language is effective but also inefficient. Its structure, grammar, syntax and meaning are often nonsensical, as many of us learned in grammar school (apt name) while trying to spell words with “i” before “e.” We were told “except after ‘c.'” Uh huh. Some experts calculate that only 44 words actually follow the rule while 923 do not.
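As an aside (not from the original column), the rule can even be checked mechanically. Here is a minimal Python sketch that scans a word for “ie”/“ei” pairs and flags violations; the sample words are illustrative choices made here, not drawn from any authoritative word list:

```python
def follows_rule(word: str) -> bool:
    """Return True if every 'ie'/'ei' pair in the word obeys
    'i before e, except after c'."""
    w = word.lower()
    for i in range(len(w) - 1):
        pair = w[i:i + 2]
        if pair == "ei" and (i == 0 or w[i - 1] != "c"):
            return False  # 'ei' without a preceding 'c' breaks the rule
        if pair == "ie" and i > 0 and w[i - 1] == "c":
            return False  # 'cie' breaks the "except after c" clause
    return True

sample = ["believe", "receive", "weird", "science", "ceiling", "their"]
exceptions = [w for w in sample if not follows_rule(w)]
print(exceptions)  # ['weird', 'science', 'their']
```

Even this tiny sample turns up three violations — a hint of why the rule fails for so many of the words it claims to cover.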

That’s hardly the end of illogical English. It’s positional, meaning words change meaning depending on their place in a sentence. Take this 10-word example: “Stop drinking this instant; that tea is better for you.” Rearrange those same 10 words — “Stop drinking; this instant tea is better for you” — and the meaning shifts, in speech and in print.

English is far more complicated than that. In fact, we do not even know how many words exist in it. According to Oxford Dictionaries,

There is no single sensible answer to this question. It’s impossible to count the number of words in a language, because it’s so hard to decide what actually counts as a word. Is dog one word, or two (a noun meaning ‘a kind of animal’, and a verb meaning ‘to follow persistently’)? If we count it as two, then do we count inflections separately too (e.g. dogs = plural noun, dogs = present tense of the verb)? Is dog-tired a word, or just two other words joined together? Is hot dog really two words, since it might also be written as hot-dog or even hotdog?

The Second Edition of the 20-volume Oxford English Dictionary has entries for “171,476 words in current use, and 47,156 obsolete words” along with 9,500 derivative words.

Machines think, “Why so many words to say the same thing? And then the same word to mean many things? Why assign different meanings beyond 0 and 1? People are inefficient. Let’s create our own language.”

That’s exactly what the bots did.

In the failed experiment, Facebook bots were dubbed Bob and Alice, names engineers use as placeholders. The Independent writes that the bots operated “on the machine value of efficiency.” As English impeded trade-making, Bob and Alice spoke to each other in code, literally and figuratively. Here’s the transcript:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

It appears as if the bots were emphasizing the importance of self (to me to me to me). That’s understandable. Ever since the iPhone 4 front-facing camera, we have been programming machines to think the individual is worth more than the collective (sorry, Star Trek) because sales are based on a person’s algorithmic profile.

People are little nodes with bad language.

Engineers will have to focus on language and its myriad shades of meaning or invent a more efficient AI vocabulary apart from computer code. Here’s the thing: At Facebook and other social media, people monitor posts, relying on machines to filter ominous words and threats. In the future, we will monitor machines to see if they are using lingo against humanity.

Interpersonal Divide in the Age of the Machine dedicates a chapter to artificial intelligence and robotics, including a section on machines developing their own codes apart from human moral ones, leading to the dreaded singularity when super-intelligent machines decide that people are inefficient.

Iowa State alumnus and former BuzzFeed writer discusses layoffs

Digital advertising increasingly is going to mega-tech companies like Google and Facebook, causing a ripple effect in fact-based journalism, with hundreds laid off last week. In this post, Tyler Kingkade–recently let-go BuzzFeed writer–has an optimistic outlook about the future of journalism.

A New York Times op-ed, “Why the Latest Layoffs Are Devastating to Democracy,” discusses recent layoffs across media platforms, including 200 staff and journalists at BuzzFeed as well as 800 from Yahoo, Huffington Post, TechCrunch and other outlets. Gannett reportedly is letting go an additional 400.

According to the piece, a chief concern involves digital advertising going to media monopolies such as Google and Facebook:

The cause of each company’s troubles may be distinct, but collectively the blood bath points to the same underlying market pathology: the inability of the digital advertising business to make much meaningful room for anyone but monopolistic tech giants.

Tyler Kingkade, an outstanding alumnus from Iowa State’s journalism school, who worked for the Huffington Post and most recently BuzzFeed, was one of the employees who received a pink slip.

He sent this message to media ethics and tech/social change classes at his alma mater:

“It’s admittedly concerning if BuzzFeed had to downsize. Particularly in our News division, they laid off reporters who were in the process of turning our work into documentaries, which was a new avenue of making money. However, the ones laid off have gotten a lot of people flagging job openings for us, or asking to meet about giving us jobs. Even the LA Times, which has reduced its staff, is now building back up. The Seattle Times oddly enough is on a hiring spree.

“Journalism is a field that does not grow – there is never going to be a boom time for us. But I don’t believe it will ever dry up. People will figure out a stable model, whether it’s through selling story rights for TV shows and movies (see Dirty John for a recent example) or subscription or a nonprofit donor model like ProPublica. I bet there will soon be something that gives you a bundle of subscriptions, in the same way that Spotify got people to finally stop illegally downloading music and pay for it again.

“The currents against you in media will always be strong; young journalists will just need to learn how to be strong enough to swim against it.”

Kingkade, based in New York, focuses on covering civil rights, crime, sexual harassment and assault, and the treatment of teens in vulnerable and traumatic situations. His work has earned multiple awards and recognition from national nonprofits, pushed companies and prosecutors to take action, and prompted inquiries by universities and lawmakers. Most recently he was a National Reporter at BuzzFeed News in New York … and is currently looking for his next assignment.

Foxconn Reconsiders $10 Billion Project: We Told You So in 2017

Any time legislators give tax incentives to corporations, they must inquire about artificial intelligence and negotiate iron-clad deals. Journalists also have an obligation to research companies’ past histories with robotics.

Interpersonal Divide posted about Foxconn’s planned Wisconsin plant in July 2017, proclaiming “Say Hi to C-3PO.” At the time, few politicians mentioned that Foxconn replaces workers with robots.

Foxconn’s use of artificial intelligence was covered in Interpersonal Divide in the Age of the Machine:

In 2016, Foxconn Technology Group, an Apple and Samsung supplier in China, replaced 60,000 workers with robots. If any country should be concerned about automation, China might top that list with its population of 1.35 billion people. In reporting the mass firing—a population larger than Pensacola, Florida—the British Broadcasting Corporation noted that economists “have issued dire warnings about how automation will affect the job market, with one report, from consultants Deloitte in partnership with Oxford University, suggesting that 35% of jobs were at risk over the next 20 years.” See: http://www.bbc.com/news/technology-36376966

Earlier plans to build the Wisconsin plant promised jobs to thousands of blue-collar workers making LCD screens. That changed this week. Reuters reports that Foxconn Technology Group “said it intends to hire mostly engineers and researchers rather than the manufacturing workforce the project originally promised.”

The British wire service also reported that Foxconn’s “technology hub” in Wisconsin “would largely consist of research facilities along with packaging and assembly operations.” The report quoted a company spokesperson: “In Wisconsin we’re not building a factory. You can’t use a factory to view our Wisconsin investment.”

As companies ask states for tax incentives to build plants, promising thousands of jobs for local residents, legislators have a responsibility to make iron-clad deals. This remains an essential discussion as the state of Wisconsin may be paying as much as $2 billion in incentives, according to a report in the Chicago Tribune. The Tribune was one of the few U.S. newspapers to note that Foxconn had replaced workers in China with robots to increase profit.

Any time a new plant involving tax incentives is announced, journalists need to monitor how robotics will be used.

Interpersonal Divide warns that artificial intelligence and robotics are going to replace jobs across the manufacturing sector: “Two-thirds of Americans believe that robots will do much of the work currently being done by people, according to the Pew Research Center; however, 80 percent of respondents believe that their own jobs and professions will be largely unaffected.”

The only way to know in advance is to put companies on record concerning artificial intelligence and to negotiate contracts that companies like Foxconn must honor.