Algorithms Target the Users Through Whom Russia Undermines Our Elections

Facebook’s admission that it sold advertising to a Russian troll farm that targeted users with fake news to manipulate an election is reminiscent of Bolshevik leader Vladimir Lenin’s famous quote: “The Capitalists will sell us the rope with which we will hang them.”

In the age of the machine, that famous dictum might read: “Facebook will sell us the data by which we will undermine their elections.”

Facebook, whose market value is estimated at $1 trillion, is capitalist to the core, especially since it is powered and empowered by machines. Its reach is legendary, approaching 2 billion users worldwide.

In an article titled “Should Facebook Ads Be Regulated Like TV Commercials?,” The Atlantic reported:

Facebook disclosed to congressional investigators that it sold $100,000 worth of advertisements to a troll farm connected to the Kremlin surrounding the U.S. presidential election. These advertisements, which targeted voters with divisive political content, added even more evidence of Russia’s attempts to meddle with the election.

The problem of free speech and Internet regulation is beyond the scope of this post, which concerns the data machines collect about Facebook users and the algorithms they generate for sale to anyone, regardless of intent.

The Atlantic article notes that social media platforms, including Facebook, Twitter and YouTube, have removed inflammatory content. It further notes that Google and web-hosting company GoDaddy have refused service to neo-Nazis. Other tech companies have banned white supremacist content.

Banning content is one thing; selling ads to fake news sites intent on undermining U.S. elections is quite another. Ads are revenue, and revenue might be considered sacrosanct.

That’s not really the problem. Machines cannot immediately identify intent or fake news. That just might take a human being, and too few are employed to monitor 83 million fake profiles and 4.75 billion pieces of content shared daily. (Click here for more Facebook stats.)

In a statement by Facebook about the sale of ads to Russian trolls, Chief Security Officer Alex Stamos said that about $100,000 in advertising, comprising roughly 3,000 ads, was connected to 470 fake news accounts. “Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia,” he added, noting the content “appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.”

That type of content triggers division, and division has widened since the 2016 presidential election. Fake content didn’t have to name presidential candidates Donald Trump and Hillary Clinton because algorithms directed social media messages to targeted groups likely to embrace, or be offended by, sensationalized messages.

Perhaps the most pernicious fake news story on social media alleged that Clinton was using a pizza shop to run a child sex ring. A man with an assault rifle believed it and entered the shop to shut down the alleged sex trade.

Another fake news account alleged thousands of people at a Donald Trump rally chanted, “We hate Muslims, we hate blacks, we want our great country back.”

Interpersonal Divide in the Age of the Machine focuses on Facebook and other social media platforms that datamine users, violating their privacy. Here’s an excerpt introducing Facebook’s terms of service, which hardly any user reads:

The question we now confront in the age of the machine is whether our devices enjoy more autonomy than humans pursuant to terms of service that allow constant datamining and intrusion and to which we have agreed, typically without taking time to read the fine digital print.

Interpersonal Divide in the Age of the Machine covers Facebook extensively, in addition to how social media collect and sell information on users.

Author Michael Bugeja was among the first to question Facebook, in 2006 in The Chronicle of Higher Education, with his watershed article, “Facing the Facebook.”

Register is right to question Iowa data centers

The Des Moines Register is questioning tax and other incentives given to major technology companies locating in Iowa, including Facebook, Google, Microsoft and now Apple.

Reporter Kevin Hardy’s article, “Will Iowa’s giant data centers spur growth in technology jobs here?,” recounts how Iowa has “attracted billions in new data center investments over the last decade, most recently with Apple’s announcement of a $1.4 billion data center planned for Waukee.”

Unlike other media that simply recount corporate relocations with a theme of job creation, the Register looked more closely at the situation. This stands in contrast to some newspapers in Wisconsin that merely recited the Trump administration’s announcement of Foxconn opening a $10 billion facility in that state.

See my post, “FOXCONN IN WISCONSIN, TRUMP? Say Hi to C-3PO!,” noting how that company fired 60,000 Chinese workers, replacing them with robots.

Interpersonal Divide in the Age of the Machine covered the Foxconn firing in depth, noting robots don’t need health care. “What they do, primarily, is replace workers, degrade existing jobs, lower wages and reduce incomes (and hence taxes), leading to budget shortfalls in the name of corporate profit.”

As far as tech companies and data centers go, Interpersonal Divide cites an article titled “Four fundamentals of workplace automation,” published in the management journal McKinsey Quarterly:

“As digital technologies automate many of the tasks that humans are paid to do, the day-to-day nature of work will change in a majority of occupations. Companies will redefine many roles and business processes, affecting workers of all skill levels. Historical job-displacement rates could accelerate sharply over the next decade.”

The Register was right on the money, or lack thereof for Iowa taxpayers, in stating:

“Data centers deliver big capital investments and fuel short-term employment in construction. But once online, the highly automated operations don’t require masses of workers. That dynamic has put renewed scrutiny on Iowa’s economic development practices, as tech giants continue to reap millions in taxpayer incentives to build here.”

Lawmakers in Iowa and elsewhere should read the Register article and hold tech companies accountable by reviewing incentives and carefully measuring the benefit to Iowa taxpayers. When we give tax breaks to companies that employ more machines than people, we are adding to the burden of ordinary citizens, including ones in the future who will be replaced by automation.

Believe in Institutional Bias? Algorithms Just Amplified That!

The technology publication WIRED just published the results of a study affirming what the new edition of Interpersonal Divide in the Age of the Machine warns against: institutional (in this case, digital) bias that only amplifies existing stereotypes.

According to WIRED, two “prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men.”
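The mechanism behind such amplification is simple enough to sketch. Below is a toy illustration (hypothetical numbers, not the study’s actual data or code): if two-thirds of “cooking” images in a training set show women, a naive classifier that maximizes accuracy will label every cooking image “woman,” pushing the bias from 66 percent toward 100 percent.

```python
# Toy sketch of bias amplification: a model trained on skewed data
# can output predictions MORE skewed than the data itself.
from collections import Counter

# Hypothetical labeled data: (activity, gender) pairs with a 2:1 skew.
data = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Bias in the training data: share of the majority gender.
counts = Counter(gender for _, gender in data)
majority_gender, majority_n = counts.most_common(1)[0]
dataset_bias = majority_n / len(data)  # 0.66

# A naive predictor always outputs the majority gender for the
# activity -- the accuracy-maximizing choice on this data.
predictions = [majority_gender for _ in data]
predicted_bias = predictions.count(majority_gender) / len(predictions)  # 1.0

print(f"dataset bias: {dataset_bias:.2f}, model output bias: {predicted_bias:.2f}")
```

The skew in the data (0.66) becomes a certainty in the model’s output (1.0), which is the pattern the researchers describe at far larger scale.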

This is nothing new. The discovery of “algorithmic racism” is covered extensively in Interpersonal Divide. Here’s an excerpt from the book published by Oxford University Press:

If you believe that institutional racism exists, that systems and organizations over time believe falsehoods about under-represented groups, then imagine the long-term consequences if such bias is coded in and programmed into machines. For instance, if machines compile data suggesting that a certain race, gender and age of people living in a given location may have a higher inclination for wrongdoing, and that person happens to wander into a wealthier section of the neighborhood, merchants equipped with apps might be prone to mistake innocent shoppers for potential shoplifters, depriving them of service or worse, accusing them of crimes.

Interpersonal Divide documents algorithmic racism across digital platforms and datasets, including decisions associated with social justice, such as determining whether inmates should be granted parole. (Penal boards in half the states use algorithms in parole hearings.)

Once again, as it did in the first edition published in 2005 by Oxford University Press, Interpersonal Divide is challenging the widespread but dubious belief in technology as liberating. In fact, the new edition asserts that “data science” is not science at all but a mirror of what society believes to be true at a given time and place in a culture. (We used to call that “social mores,” as in “women belong in the kitchen,” something the WIRED article about algorithms seems to affirm.)

In the age of machines, “place” also includes “cyberspace,” as in the University of Virginia study referenced above that “discovered” bias about women in consumer platforms associated with Facebook and Microsoft.

As the WIRED article attests, “As AI-based systems take on more complex tasks, the stakes will become higher.”

Tell that to inmates and under-represented groups depicted in news and digital media photos in stereotypical ways, reflecting social mores replete with bias, racism and sexism from the literary era to the present day.

To order a copy of Interpersonal Divide in the Age of the Machine, $19.95, click here.

FOXCONN IN WISCONSIN, TRUMP? Say Hi to C-3PO!

President Donald Trump and Wisconsin Gov. Scott Walker are touting plans by Foxconn, a Taiwan-based technology firm with a checkered past, to open a $10 billion facility in the state, reportedly creating 13,000 jobs by 2020.

As CNN reports, “Foxconn’s estimate on jobs was more conservative. In a statement, the company said the project will create 3,000 jobs with the ‘potential’ to generate up to 13,000 new jobs.”

Trump and Walker ought to read the fine print.

What few politicians are mentioning is how Foxconn routinely replaces workers with robots. When it comes to people, the company has been the focus of several investigations into inhumane treatment of employees.

Interpersonal Divide in the Age of the Machine covers Foxconn and devotes chapters to robotics replacing humans in the workplace:

In 2016, Foxconn Technology Group, an Apple and Samsung supplier in China, replaced 60,000 workers with robots. If any country should be concerned about automation, China might top that list with its population of 1.35 billion people. In reporting the mass firing—a population larger than Pensacola, Florida—the British Broadcasting Company noted that economists “have issued dire warnings about how automation will affect the job market, with one report, from consultants Deloitte in partnership with Oxford University, suggesting that 35% of jobs were at risk over the next 20 years.” See: http://www.bbc.com/news/technology-36376966

Foxconn also has a dubious past associated with forced labor, as this New York Times investigation revealed. There was even a rash of suicides associated with Foxconn, as another Times story disclosed in 2010. While the company has made significant progress in improving employee work environments, its interest in robotics hasn’t waned, nor has it been extensively reported of late in U.S. media.

This is an essential discussion, as the state of Wisconsin may pay as much as $2 billion in incentives, according to a report in the Chicago Tribune. The Tribune was one of the few U.S. newspapers to note that Foxconn fired masses of workers in China, replacing them with robots to increase profit.

In some sense, given past allegations that Foxconn treated its employees as if they weren’t human, perhaps using robots has more appeal.

In any case, journalists need to monitor how robots will be used in the Wisconsin facility. They should cite how the company uses them in other facilities around the globe. Why would a Midwestern state be any different when it comes to the bottom line?

After all, robots don’t need health care. They don’t commit suicide or complain about slave labor. (In fact, there is an offensive technology term for this trait.) What they do, primarily, is replace workers, degrade existing jobs, lower wages and reduce incomes (and hence taxes), leading to budget shortfalls in the name of corporate profit. In this case, those profits go to China’s coffers and not American ones.

Facebook Quizzes Invite Spammers and Hackers

Facebook quizzes often appeal to our curiosity or ego, or seemingly provide entertainment during lulls in social media posts or other routine activities, using algorithms to ascertain who your secret admirers might be or which celebrity best fits your profile.

You take the quiz, which asks you to log in with your Facebook credentials and then “like” the site and share it with your friends. They take the quiz, too, and share it with their friends, and all the while information from your smartphone is being harvested and sold to third parties.

Some of those third parties are spammers and hackers. Some offer their own invasive quizzes, and if you take those, you are putting your device and your privacy at risk, or worse, inviting malware.

Unlike spam that arrives via email and asks you to click on a link, taking these quizzes IS the link. So you might not be aware of the hazards until it is too late.

Interpersonal Divide in the Age of the Machine contains several chapters on social media, including information on Facebook spammers and hackers and loss of privacy. Here’s an excerpt about Facebook datamining:

The question we now confront in the age of the machine is whether our devices enjoy more autonomy than humans pursuant to terms of service that allow constant datamining and intrusion and to which we have agreed, typically without taking time to read the fine digital print. 

Click the photo above or this link to hear a special report from an NBC affiliate. WFLA’s Lindsey Mastis interviews an expert from the Florida Center for Cyber Security about the hidden dangers of Facebook quizzes.

The Center has its own video on ways to protect yourself in cyberspace. Click the photo below to view it or paste this into your browser: https://youtu.be/sdpxddDzXfE.

Turns Out There is a “There” There

A phone call, rather than texts, proved decisive in the case of Michelle Carter, convicted of involuntary manslaughter in the well-publicized “texting” suicide case, according to an article in the New York Times:

For a case that had played out in thousands of text messages, what made Michelle Carter’s behavior a crime, a judge concluded, came in a single phone call. Just as her friend Conrad Roy III stepped out of the truck he had filled with lethal fumes, Ms. Carter told him over the phone to get back in the cab and then listened to him die without trying to help him.  That command, and Ms. Carter’s failure to help, said Judge Lawrence Moniz of Bristol County Juvenile Court, made her guilty of involuntary manslaughter. ….

To be sure, texting evidence was presented in her trial. Those texts are deplorable and an indication of how technology can be used to trigger horrific consequences.

Here’s a sampling of Carter texts published by CNN:

Carter: “You’re gonna have to prove me wrong because I just don’t think you really want this. You just keeps pushing it off to another night and say you’ll do it but you never do”
Carter: “SEE THAT’S WHAT I MEAN. YOU KEEP PUSHING IT OFF! You just said you were gonna do it tonight and now you’re saying eventually. . . .”
Carter: “But I bet you’re gonna be like ‘oh, it didn’t work because I didn’t tape the tube right or something like that’ . . . I bet you’re gonna say an excuse like that”
Carter: “Do you have the generator?”
Roy: “not yet lol”
Carter: “WELL WHEN ARE YOU GETTING IT”

The American Civil Liberties Union of Massachusetts believes the texting aspect of this case is a chilling expansion of criminal law in Massachusetts, according to the NYT article. The texts in this case received worldwide attention because of texting’s prevalence in society. While the phone call may have won the day for the prosecution, the idea of virtual reality as a dual reality is more chilling because it puts defendants in two places at the same time, a violation of physics and of defense strategy in any criminal case.

The second edition of Interpersonal Divide goes into depth about how texts harm children at home and school, devoting two chapters to the topic. However, Interpersonal Divide also analyzes the Internet according to the tenet of there being no “there” there. That’s a phrase coined by the writer Gertrude Stein about her childhood home in California no longer existing. She used the phrase about geographical reality, but many social critics revived it to apply to the Internet.

Interpersonal Divide believes the ACLU makes a valid point in asserting that using texts as incriminating evidence merely because they prod others toward an illegal act is a serious expansion of criminal law. Moreover, the news media, apart from the NYT and a few other outlets, did not make clear that it was the phone call, not the texts, that swayed the judge in his decision.

On appeal, it may be argued that the prosecution relied too heavily on texts as evidence in establishing a pattern of guilt leading to the conviction of Michelle Carter, who also sent texts like these, which did not attract the same media coverage:

Carter: “But the mental hospital would help you. I know you don’t think it would but I’m telling you, if you give them a chance, they can save your life”
Carter: “Part of me wants you to try something and fail just so you can go get help”
Roy: “It doesn’t help. Trust me”
Carter: “So what are you gonna do then? Keep being all talk and no action and everyday go thru saying how badly you wanna kill yourself? Or are you gonna try to get better?”
Roy: “I can’t get better I already made my decision.”


We will follow up on the Interpersonal Divide website on any case that takes into account the demurrals articulated in this post.