New Twitter study shows older users spread fake news

A new study of 16,000 Twitter users reveals less sharing of bogus content on the platform than anticipated, with the majority of those spreading fake news tending to be older and politically conservative.

A report in Science about the new Twitter study shows that a mere “0.1% of the more than 16,000 users shared more than 80% of the fake news … and 80% of that fake news appeared in the feeds of only 1.1% of users.”

A team led by David Lazer, a political scientist at Northeastern University, analyzed tweets from 16,442 registered voters who used Twitter during the 2016 election.

According to the report in Science, one of the most popular sources of misinformation identified by the study is a site called “The Gateway Pundit,” which published false headlines including: “Anti-Trump Protesters Bused Into Austin, Chicago” and “Did a Woman Say the Washington Post Offered Her $1,000 to Accuse Roy Moore of Sexual Abuse?”

The Gateway Pundit is one of several outlets identified as fake news sites. Here is a list of such sites compiled by Snopes (warning: images on this page and links to fake news reports are disturbing; please do not access the page if you feel the content will offend you).

The Northeastern University study comes on the heels of another study concerning fake news on Facebook.

A MarketWatch report found similarities between the two studies, noting that the study of false information on Facebook found “few people shared fakery, but those who did were more likely to be over 65 and conservatives.”

Interpersonal Divide in the Age of the Machine contains information in several chapters associated with social media use and how it influences our thoughts, words and deeds at home, school and work.

This site also published a guide to avoiding fake news and accessing legitimate news sites.

Michael Bugeja, author of the guide and Interpersonal Divide, asks social media users to think like a journalist, adopting these four traits:

1. Doubt — a healthy skepticism that questions everything.
2. Detect — a “nose for news” and relentless pursuit of the truth.
3. Discern — a priority for fairness, balance and objectivity in reporting.
4. Demand — a focus on free access to information and freedom of speech, press, religion, assembly and petition.

A recent op-ed by Bugeja in the Des Moines Register also documents how fake news taints the journalism profession because users do not distinguish between journalism and media.

Here’s what you can do about fake news before the 2020 election

Des Moines Register

By Michael Bugeja, Iowa View Contributor

In 2019, Iowans will hear the phrase “fake news” whenever a report sullies a political party or presidential hopeful. We may support or scorn candidates without knowing fact from factoid.

This column explains what you can do about it.

People typically do not differentiate between journalism and media. Journalists report and edit news. Media mostly disseminate news (e.g., tweets, posts, blogs, websites, Android apps). Journalists adhere to ethical standards. Social media does not.

Many voters no longer believe what they read, view or hear. We have a choice: Embrace lies and half-truths or subscribe (actually pay something) to access fact-based reports.

For the rest of the post, click here or visit: https://www.desmoinesregister.com/story/opinion/columnists/iowa-view/2019/01/22/heres-what-you-can-do-fake-news-before-2020-presidential-election-journalism-media-russia-facebook/2647269002/

Twitter suspends account that spread viral confrontation

CNN and the Washington Post put into context the controversial encounter between a Native American veteran and a group of high school students wearing MAGA hats in the nation’s capital.

This post discusses how technology spreads incomplete information via non-journalists who might have personal or political agendas. It does not attempt to discuss the specifics of the encounter but rather the technology behind it as an act of media manipulation.

For a more comprehensive report on the confrontation, see this article by the Washington Post, which discusses the responsibility of the high school boys’ chaperones as well as what other bystanders observed.

The Post also provides a more complete version of the encounter in the above video, featuring an interview with Nathan Phillips, the Native American veteran seen chanting and playing a drum.

The student, Nick Sandmann, released his own statement. In it, he writes:

The protestor [sic] everyone has seen in the video began playing his drum as he waded into the crowd, which parted for him. I did not see anyone try to block his path. He locked eyes with me and approached me, coming within inches of my face. He played his drum the entire time he was in my face. I never interacted with this protestor [sic]. I did not speak to him. I did not make any hand gestures. …  To be honest, I was startled and confused as to why he had approached me. We had worried that a situation was getting out of control where adults were attempting to provoke teenagers. I believed that by remaining motionless and calm, I was helping to diffuse the situation. …  

The viral video showed only what appeared to be a confrontation between Sandmann and Phillips, with the student intentionally blocking the veteran’s path.

CNN Business reported the following about the viral video:

    • A more complete video was posted on Instagram by a person attending the event.
    • An account, @2020fight, posted only the segment featuring Sandmann and Phillips with this caption: “This MAGA loser gleefully bothering a Native American protester at the Indigenous Peoples March.”
    • @2020fight was said to belong to a California schoolteacher, but the profile photo depicted a Brazilian blogger.
    • The @2020fight account tweeted on average 130 times a day and had more than 40,000 followers.
    • A network of anonymous accounts was working to amplify the video.
    • Multiple newsrooms tried unsuccessfully to contact @2020fight.

After @2020fight’s video was released, it made national news and was retweeted 14,400 times, according to CNN Business.

The Washington Post video above shows how a journalist would have handled the encounter, interviewing a main participant and also not sensationalizing the taunts directed at the students by a small group of protesters. It showcases attempts to be fair and balanced … after the fact.

Conversely, many media outlets ran with the viral version of the encounter without vetting it as CNN and the Washington Post did later. By then, however, the controversial video had been viewed on social media more than 2.5 million times.

Interpersonal Divide in the Age of the Machine discusses how technology manipulates media with sections about fake news that drive political agendas, as happened in the 2016 presidential election.

Should the Media Have Reported Un-Redacted Manafort Content?

Omitted from the buzz about the poorly redacted court filings associated with former Trump campaign manager Paul Manafort is the ethics of un-redacting and reporting sensitive content filed in U.S. courts.

In the 2016 presidential campaign, digital subterfuge was a key component, from creation of fake news to sale of Facebook user data. You’d think court filings on convicted Trump-campaign associate Paul Manafort might have been properly redacted.

Nope.

Reporters hungry for more information about Special Counsel Robert Mueller’s investigation checked to see if a mistake was made in redacting a sensitive document prepared by Manafort’s attorneys.

It’s a common error. Tech experts will tell you that thousands of redacted documents online can be easily manipulated to view their contents. Often a staff person or official draws black boxes that can be moved or removed from a document, or selects passages and conceals them with a black background, which of course can be changed: just select the passage and switch to a white background, exposing the text.
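
To see why a drawn-on black box offers no real protection, here is a minimal sketch, assuming the open-source Python library pypdf and a hypothetical file name: if the words are still in the PDF’s content stream, ordinary text extraction returns them, black rectangles and all.

```python
# Minimal sketch: text "hidden" under a drawn black box is still present in the
# PDF's content stream, so plain text extraction recovers it.
# Assumes the open-source pypdf library; the file name is hypothetical.
from pypdf import PdfReader

reader = PdfReader("improperly_redacted_filing.pdf")
for number, page in enumerate(reader.pages, start=1):
    print(f"--- page {number} ---")
    print(page.extract_text())  # cosmetic rectangles drawn on top are ignored
```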

In this case, Manafort’s lawyers had filed a response to an allegation that he lied to prosecutors. However, on page 5, either his attorneys or Mueller’s staffers failed to “flatten” the PDF, leaving the redacted passages readable.

Adobe has a tool that properly redacts (i.e., flattens) content, as shown in the above video. Other methods include taking a photo of the document and converting the image to a PDF, or printing the document, redacting it with a felt pen and scanning it back into a PDF.
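
For readers who want a programmatic equivalent of the print-and-rescan approach, here is a minimal sketch, assuming the open-source Python libraries pdf2image (which relies on the poppler utilities) and Pillow, with hypothetical file names. It rasterizes every page and rebuilds the PDF from the images, so nothing selectable remains beneath the black boxes.

```python
# Minimal sketch: "flatten" a redacted PDF by rendering each page to an image
# and re-assembling the images as a new PDF. The result contains only page
# pictures, so covered text can no longer be selected, copied or extracted.
# Assumes pdf2image (which needs the poppler utilities) and Pillow;
# file names are hypothetical.
from pdf2image import convert_from_path

def flatten_pdf(src_path: str, dest_path: str, dpi: int = 200) -> None:
    pages = convert_from_path(src_path, dpi=dpi)  # one PIL image per page
    first, rest = pages[0], pages[1:]
    first.save(dest_path, "PDF", save_all=True, append_images=rest)

# Example: flatten_pdf("filing_redacted.pdf", "filing_flattened.pdf")
```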

An ethical question, largely ignored by the media, is whether reporters should have disclosed the sensitive information as it was not intended for public consumption. Perhaps the disclosure would cause prosecutors or defense attorneys to change their strategy or even taint the ongoing investigation.

The media associated the disclosure with collusion, reporting that Manafort may have met with a Russian intelligence contact and provided polling data from the Trump campaign.

According to the Washington Post:

Attorneys for Paul Manafort, Trump’s former campaign chairman, inadvertently included a big reveal in a court filing on Tuesday through their clumsy failure to properly redact key portions. They admitted that during the 2016 campaign Manafort and his longtime associate Konstantin Kilimnik, who the FBI has said has ties to Russian intelligence, discussed a peace plan for Ukraine and that Manafort also shared with him political polling data.

As for media ethics, the standard seems situational: “If you make a digital mistake, we are absolved and so can report confidential information.”

Perhaps not in this case, but one nevertheless can imagine other scenarios when the dissemination of such information could pose a national security threat.

In the digital age, someone viewing improperly redacted court filings is going to disclose the content. As soon as one party disseminates that, others will un-redact and report.

Ultimately, then, the government and officers of the court have a responsibility to know how to use digital tools before filing sensitive documents in the U.S. court system.

Washington Post: Deep Fake AI Technology Targets Women

WARNING: Sensitive material. Content involves artificial intelligence weaponized against women.

The Washington Post reports a disturbing new use of artificial intelligence–in a free app, no less–that enables users to paste the image of anyone onto the face of someone else depicted in a video. The unethical uses of this “deepfake” technology are myriad, but they increasingly target women.

According to a Dec. 30, 2018 article by Drew Harwell,   

Supercharged by powerful and widely available artificial-intelligence software developed by Google, these lifelike “deepfake” videos have quickly multiplied across the Internet, blurring the line between truth and lie. But the videos have also been weaponized disproportionately against women, representing a new and degrading means of humiliation, harassment and abuse.

The Post reports that actress Scarlett Johansson’s face has been superimposed into dozens of graphic sex scenes now available on the Internet. There is also a growing concern that the technology can take images from social media sites like Facebook and superimpose them on similar explicit videos as a new type of AI revenge porn.

The fakes “are explicitly detailed, posted on popular porn sites and increasingly challenging to detect.” Worse, the Post article states that victims may have little recourse as the legality of the technology has yet to be challenged and may even be protected by the First Amendment unless associated with existing laws on defamation, identity theft or fraud.

An anonymous online community of creators is instructing others on how to create deepfake videos, a dangerous new weapon in the troll arsenal.

Interpersonal Divide in the Age of the Machine covers similar abuses of AI in several chapters, prophesying these new technologies will erode our perception of the world so that we no longer can discern what is real or fake. That impacts how we interact with others and the manner in which we experience the world.

The thesis of the book documents how “moral code is corrupted by machine code.”

NYT Report: Facebook Allowed Tech Giants Access to Personal Data

Facebook routinely allowed favored tech companies–including Amazon, Netflix and Spotify–unencumbered access to users’ personal data, “effectively exempting those business partners from its usual privacy rules,” according to the New York Times.

One of the ways Facebook facilitated the favored status of tech giants was through Microsoft’s Bing search engine, which gave access to “virtually all Facebook users’ friends without consent,” the Times reported in its investigation.

For insight into the scope of the Facebook practices, consider this excerpt from the Times’ article:

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. … Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them.  

Here are other disclosures from the Times’ report:

  • Yahoo could view real-time feeds of friends’ posts. “A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising.”
  • Facebook’s internal records show deals with more than 60 makers of smartphones, tablets and other devices.
  • Facebook allowed Apple to hide from Facebook users “all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing. …”

Interpersonal Divide’s author Michael Bugeja was one of the first in the nation to criticize Facebook practices, as detailed in this January 2006 article in the Chronicle of Higher Education, titled “Facing the Facebook.”

Here are other Facebook-related posts from the latest edition of Interpersonal Divide in the Age of the Machine:

This latest disclosure continues to show Facebook’s questionable business practices in yet another attempt to profit from users’ personal data.

The Times believes personal data “is the oil of the 21st century, a resource worth billions to those who can most effectively extract and refine it.” The newspaper notes that Facebook has never sold its user data. “Instead, internal documents show, it did the next best thing: granting other companies access to parts of the social network in ways that advanced its own interests.”