Category: Uncategorized

The 2024 election promises dystopia in the age of AI

Will it be like ‘1984,’ ‘Brave New World,’ or ‘A Clockwork Orange’?

 Artificial intelligence is often featured in dystopian novels. (Photo illustration by Valery Brozhinsky/Getty Images)

Many Americans are concerned about the upcoming election, anticipating a grim future if their nominee fails to secure the 270 electoral votes needed to become the 47th president of the United States.

Political parties are playing on those fears, using the latest technology.

Artificial intelligence has long been associated with doomsday in dystopian novels, including the oppressive dictatorship in “1984” by George Orwell, the socially engineered society in “Brave New World” by Aldous Huxley, and the deep state of “A Clockwork Orange” by Anthony Burgess.

Each of these works has a communication element. “1984” uses simulated scenes of perpetual enemies as well as facial recognition. “Brave New World” warns against technology controlling how people act and think—a prescient vision of brain-computer interface chips. Technology is used in “A Clockwork Orange” to deter recidivism — as we have today, with AI used for parole decisions.

These books highlight issues being debated today as campaigns use AI to uplift or deride Democrat Joe Biden and Republican Donald Trump. Forbes magazine reported that both parties are using algorithms to clone faces and voices of candidates in various simulated settings.

Here is the transcript of a GOP video warning about a second Biden term.

“This just in. We can now call the 2024 presidential race for Joe Biden. This morning an emboldened China invades Taiwan. Financial markets are in free fall as 500 regional banks have shuttered their doors. Border agents were overrun by a surge of 80,000 illegals yesterday evening. Officials closed the city of San Francisco this morning citing the escalating crime and Fentanyl crisis. Who’s in charge here? It feels like the train is coming off the tracks.”

ABC News reported how AI is being used to cast Trump as a criminal, pilfering images of his court appearances as the basis for phony scenes, as in this video.

The network also reported that other scenes “may have been generated by artificial intelligence, the latest in a series of hyper-realistic fake images deceiving many online and raising concerns over the sophistication and accessibility of AI-powered tools.”

Each party is using state-of-the-art technology to amplify its message. On the eve of the Iowa caucuses, Trump shared such a video on Truth Social, proclaiming himself God’s chosen emissary on earth.

Here’s the transcript:

“On June 14th, 1946, God looked down on his planned paradise and said, I need a caretaker. So God gave us Trump. God said I need somebody willing to get up before dawn, fix this country, work all day, fight the Marxists, eat supper, then go to the Oval Office and stay past midnight at a meeting of the heads of state. So God made Trump. I need somebody with arms strong enough to rustle the Deep State and yet gentle enough to deliver his own grandchild, somebody to ruffle the feathers, tame cantankerous World Economic Forum, come home hungry, have to wait until the First Lady is done with lunch with friends, then tell the ladies to be sure and come back real soon and mean it. So God gave us Trump.”

The Lincoln Project, a pro-democracy organization and frequent Trump critic, responded to the above video with this parody containing AI-generated images.

Here’s an excerpt:

“God said, ‘I need a corrupt man who is above the law and immune from justice.’ So God made a dictator. God said, ‘I need a man who will use violence to seize power.’ So God made a dictator. God said, ‘I need a man whose followers will call Black white, call evil good and call criminals hostages.’ So God made a dictator. God said, ‘I need his political party to obey without question and the press to fear his wrath.’ So God made a dictator. God said, ‘I need a cruel man who uses his power and position to punish and harm his opposition.’ So God made a dictator. God said, ‘I need a man who breaks the faith of even his most godly followers and leads them to idolatry, placing him above me.’ So God made a dictator.”

Dictators figure prominently in dystopian works, beginning with Big Brother in “1984,” whose name became synonymous with surveillance. Mustapha Mond, the antagonist of “Brave New World,” elevates science in a new social order emphasizing conformity. The Minister of the Interior in “A Clockwork Orange” represents the deep state undermining liberty.

Which awaits us?

I asked one of the country’s foremost AI experts, Jeffrey Cole, director of the Center for the Digital Future at the University of Southern California-Annenberg.

“I don’t know that I distinguish between ‘Brave New World’ and ‘1984,’” he said. “The future is going to have elements of that.” America already is experiencing the disinformation of “1984.” And like “Brave New World,” he added, “some people’s agenda is to throw everything into doubt so that you believe nothing.”

Cole also referenced Stanley Kubrick’s film adaptation of “A Clockwork Orange” with the state requiring criminals to ingest the paralytic drug Serum 114, altering the brain at the mere thought of violence. “I don’t see that,” Cole said.

These and other dystopian books typically feature one oppressive political system. Our hybrid version augurs a divided society that favors and fears autocracy and science.

The epic will be written on Nov. 5.

Social media’s threat to Iowa children includes dangerous terms of service

The Iowa Legislature is right to restrict children’s use of these platforms. But we can do more, emphasizing literacy and civics.

  • Michael Bugeja is a distinguished professor of liberal arts and sciences at Iowa State University.

Copyright Des Moines Register, 2024

Long after risks became apparent 20 years ago — including screen addiction, loss of face-to-face communication, cyber stalking, bullying and harassment — lawmakers finally are trying to restrict social media accounts of underage users and monitor effects of these perilous platforms.

Lack of legislation has allowed tech CEOs to deflect deleterious effects, hire D.C. lobbyists and write terms of service so opaque that users simply ignore them and click “I agree,” the most pervasive lie told every day across Iowa.

As the Des Moines Register reported, the Iowa House has approved a bill (House File 2523) requiring children under age 18 to get parental approval to open social media accounts on such sites as Instagram, Facebook and TikTok. The state attorney general also could sue any company violating those provisions.

At the federal level, the U.S. House Energy and Commerce Committee unanimously approved a bill, supported by the White House, that would force TikTok to split from its Chinese-controlled parent company. Otherwise, the platform would face removal from application distributors and web hosting services. On Wednesday, the U.S. House approved the proposal on a vote of 352-65.

Social media companies made more than $11 billion from minors last year. Forbes notes that these companies utilize a revenue model exploiting people of any age: “Find users who will add and engage with content; keep them there at all costs; bring in more users; and sell ads or data. Rinse and repeat.”

The time teens spend on social media sites, 4.8 hours per day, harms mental health. But there is also an indirect impact on what they actually know about civics, with implications for their own future.

Many users under age 18 simply cannot fathom the rules of popular sites. The informational hub All About Cookies reports the average length of terms was 6,141 words — “enough to fill 13.5 single-spaced or 22 double-spaced pages” — with Facebook’s the densest, written at the comprehension level of college graduates.

Here’s an excerpt earning a 16.2 score on the Flesch-Kincaid scale, requiring skilled readers at the 11th through 18th grade levels:

If we remove content that you have shared in violation of the Community Standards, we’ll let you know and explain any options you have to request another review, unless you seriously or repeatedly violate these Terms or if doing so may expose us or others to legal liability; harm our community of users; compromise or interfere with the integrity or operation of any of our services, systems or Products; where we are restricted due to technical limitations; or where we are prohibited from doing so for legal reasons.
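For context, the Flesch-Kincaid grade level is a published formula: 0.39 times the average words per sentence, plus 11.8 times the average syllables per word, minus 15.59. Here is a rough Python sketch; its vowel-group syllable counter is a crude heuristic, so scores will only approximate those from commercial readability tools.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (at least one per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z’']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("If we remove content that you have shared in violation of the "
          "Community Standards, we’ll let you know and explain any options "
          "you have to request another review.")
print(round(flesch_kincaid_grade(sample), 1))  # one long sentence scores high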

U.S. laws require these platforms to state what data is being collected and how, and whether that information is shared with third parties.

For the rest of the column, visit the Des Moines Register.

Journalism must cover AI as a shared future that awaits us all, whether we want it or not

An assessment of the boons, banes and boondoggles awaiting an unwary public without follow-up coverage

(Shutterstock)

By: Michael Bugeja

When I worked as a wire service reporter more than 40 years ago, we kept a cardboard box full of file folders alphabetically labeled with our most important follow-up stories. Using this, we provided around-the-clock coverage 365 days a year for our clients.

If someone was arrested for a crime, we wrote about the booking and later the arraignment, the charge, the jury selection, and so forth until the verdict. We also did this on top stories that linger to this day, including clean energy, pollution, the environment, voting and abortion rights, campaign coverage, state/federal elections, political scandals, racism, sexism, health care crises, and medical/scientific discoveries.

We did relatively few follow-up reports about the great technological advances of email (1971), electronic gaming (1972), disc storage (1972), video recorders (1972), cellphones (1973), digital cameras (1975), Apple computers (1976), GPS (1978) and portable music players (1979).

As these came on the market, we featured them with tired themes of convenience, consumerism and entertainment.

The same phenomenon is happening today, only the stakes are much higher, involving machines rapidly gaining a facsimile of consciousness without conscience, human oversight or government regulation. Artificial intelligence not only has become a part of our lives but also soon may be a part of our bodies, promising panacean benefits that few reporters are able to fact-check, let alone fathom.

FOR THE REST OF THE ARTICLE, VISIT: https://www.poynter.org/commentary/2024/how-journalists-cover-ai-future-predictions/

Computers control our future. Will they have a conscience?

MICHAEL BUGEJA, IOWA CAPITAL DISPATCH

 (Photo illustration via Canva)

I am a technological determinist who believes humans do not control machines; they control us. In the age of artificial intelligence, we should revisit that viewpoint to glimpse what awaits us.

During the heyday of the internet in the 1990s, developers created algorithms to glue us to screens. They downplayed the impact on our psyches. After all, we could always turn off computers.

Then those advocates freaked out over the Y2K bug of 1999. They implored us to shut down computers as we transitioned from Dec. 31, 1999, to Jan. 1, 2000. To save memory, programmers had coded machines to store four-digit years as two digits. They worried that computers would read “00” as 1900, not 2000, setting us back a century.
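A minimal sketch, using hypothetical stand-ins for legacy storage and read-back routines, shows why that two-digit shortcut alarmed programmers:

```python
# A toy model of the Y2K bug: legacy systems saved only the last two
# digits of the year and assumed the 1900s when reading them back.
# These functions are illustrative stand-ins, not real legacy code.

def stored_year(full_year: int) -> int:
    """Store only the last two digits, as memory-starved systems did."""
    return full_year % 100

def interpreted_year(two_digits: int) -> int:
    """Read the two digits back, assuming the 20th century."""
    return 1900 + two_digits

for year in (1999, 2000):
    saved = stored_year(year)
    print(f"{year} saved as {saved:02d}, read back as {interpreted_year(saved)}")

# Output:
# 1999 saved as 99, read back as 1999
# 2000 saved as 00, read back as 1900  <- the feared century rollback
```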

Iowa State University prophesied that few problems would occur. “Most faculty and staff won’t need to visit the workplace to check for Y2K problems on New Year’s Eve, New Year’s Day or even Sunday, Jan. 2. If potential Y2K problems don’t pose serious risks to your department operations, you probably can wait until the next workday to check your office and equipment.”

Y2K was a sign that computers were controlling us. Rather than celebrate the new millennium, we fretted about losing data.

Now we fret about losing jobs to robots.

Already artificial intelligence (AI) is driving our cars, recognizing our faces, decoding our fingerprints, developing public policy, making parole decisions, informing marketing, controlling supply chains, and reshaping performance in travel, insurance, medicine, agriculture, retail, automotive assembly, and aerospace and defense.

To be sure, AI promises myriad social benefits. It has enhanced medical science in disease diagnosis, drug treatment and clinical trials. It automates production lines, eliminating repetitive tasks and safety risks. It predicts severe weather conditions.

But there are significant risks. Chatbots hallucinate (presenting false information as fact). AI algorithms can be biased, trained on exabytes of stereotyped data that elevate men over women in job placement or deny loans to people of color.

My concerns are futuristic.

There are four AI types:

  • Reactive Machines, operating on set rules applied to such data as purchase history and customer preference. You get Netflix recommendations based on past viewing (a minimal sketch follows this list).
  • Limited Memory Machines, imitating the way our brains function as they process data. You get self-driving cars that scan traffic lights, signs, curves, potholes and road closures.
  • Theory of Mind Machines, still under development, with the potential to decipher and then act on human thoughts and emotions.
  • Self-Aware Machines, theorized to possess a sense of self, or consciousness.
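To make the first type concrete, here is a toy reactive recommender in Python. The catalog, genres and viewing history are all hypothetical; the point is only that fixed rules applied to past behavior, with no deeper learning, produce the suggestion.

```python
# A toy "reactive machine": fixed rules over viewing history, nothing more.
# Catalog and history are made up for illustration.

CATALOG = {
    "sci-fi": ["Blade Runner", "Arrival", "Ex Machina"],
    "drama": ["The Crown", "Succession"],
}

def recommend(history: list[tuple[str, str]]) -> list[str]:
    """Suggest unseen titles from the genre watched most often."""
    counts: dict[str, int] = {}
    for _, genre in history:
        counts[genre] = counts.get(genre, 0) + 1
    top_genre = max(counts, key=counts.get)          # rule 1: favorite genre
    seen = {title for title, _ in history}
    return [t for t in CATALOG[top_genre] if t not in seen]  # rule 2: skip seen

print(recommend([("Arrival", "sci-fi"), ("Ex Machina", "sci-fi"),
                 ("The Crown", "drama")]))
# ['Blade Runner']
```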

Philosophers, scientists and engineers are debating when machines will become conscious and whether we will know when they do.

A 2023 report in Nature notes that failing to detect machine consciousness “has important moral implications.” The article cites several neuroscience-based theories defining biological consciousness. But there is no consensus.

AI knows how to fool humans by mimicking likely responses. If trained on Descartes’s maxim, “I think, therefore I am,” a machine might make that philosophical claim. Or it simply might define consciousness as the ability to identify and locate itself in physical space, like a Google map.

There is no there there in machines. They may have neural networks inspired by human brains, but no inkling of how that brain evolved over millennia to ensure biological and moral survival. As the New York Times surmises, “It’s hard to see how these things could be coded into a machine.”

Humans evolved via mirror neurons. We feel what others experience. We empathize with strangers undergoing trauma and even mourn them in the safety of our living rooms while viewing traumatic news.

We also possess a conscience associated with “gut instinct.” Johns Hopkins Medicine calls this “our second brain.”

In other words, we not only perceive our environment but also feel it. The operative word is “feel.” Robots cannot feel but might pretend that they do.

An article titled “Will AI have a conscience?” notes that AI is being used “to care for the elderly, teach our children, and perform many other tasks that require moral human judgement.” Human brains developed that judgment via “a reward and punishment system” that ensures the survival of our genes. That is why parents protect offspring at personal expense. Some people even risk their lives to save complete strangers in distress.

Theory of Mind development fails to compensate for that because, in sum, machines can’t feel.

Nevertheless, Popular Mechanics reports a “stunning” Theory of Mind achievement involving a neural network with intuitive skills of a 9-year-old. It hopes that machines develop “empathy and morality,” which could be “a big boon for things like self-driving cars when needing to decide whether to put a driver in danger to save the life of a child crossing the street.”

Good luck with that. What would an adolescent do if driving a car with a child in the lane? Also, factor in vehicle owners paying for AI insurance that saves their lives, not those of pedestrians.

Psychiatry theorizes that people without consciences are psychopaths and that those who feign consciences are narcissists. The former don’t care about empathy; the latter care only about what other people think.

That is the future of machines and, perhaps, us too, in the absence of regulation and oversight.

Artificial intelligence poses risks in public policymaking

Without regulation, bias could infect decisions on issues ranging from justice to health care

MICHAEL BUGEJA

Artificial intelligence can introduce bias in decisions about public policy on issues including parole and judicial sentencing. (Photo via Canva)

For the past year the public has debated use of the artificial intelligence application ChatGPT in composing student essays, passing law examinations and replacing jobs and professions. There are greater concerns, far less publicized, about use of AI in public policy.

It is one thing for a student to be accused of plagiarism and another for an inmate to be denied parole because of a biased dataset.

ChatGPT routinely commits a multitude of errors — from factoids and fake news to false citations and spurious conclusions — otherwise known as “AI hallucinations.” Such errors also afflict public policy, which contends with biased datasets as well.

In varying degrees, machines learn from humans and/or other machines. Here are four types, with a brief sketch after the list:

  • Supervised learning. Machines analyze datasets, making predictions with human oversight.
  • Semi-supervised learning. Machines analyze and add to datasets, making predictions with some human oversight.
  • Unsupervised learning. Machines correlate data and make their own decisions.
  • Reinforcement learning. Machines learn by trial and error, adapting to situations for desired results.
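To ground the first and third types, here is a minimal scikit-learn sketch on toy, made-up data (an illustration, not a policy model): the supervised classifier learns from human-provided labels, while the unsupervised clusterer groups the same points with no labels at all.

```python
# Supervised vs. unsupervised learning on toy data (illustrative only).
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [2, 1], [8, 9], [9, 8]]   # four data points, two features each

# Supervised: humans supply the answers (labels) the machine learns from.
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5, 0.5]]))       # -> [0], matching the human labeling

# Unsupervised: no labels; the machine finds its own groupings.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                      # two cluster ids, in arbitrary order
```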

Artificial intelligence applies such learning to do specific tasks or acquire potential goals. Again, there are four types:

  • Reactive AI. Applications do not learn from past interactions but can play chess, serve as a spam filter and analyze data sets.
  • Limited Memory AI. Upgraded applications learn from past inputs, as found in self-driving cars, savvy virtual assistants and popular chatbots.
  • Theory of Mind (General Intelligence) AI. Still under development, applications may be able to fathom human nature, viewpoints, and emotions, making policy decisions based on computation.
  • Self-Aware (Superintelligence) AI. Future machines will be able to form opinions and emotions about themselves, without any human-imputed data, oversight or regulation.

At present, artificial intelligence performs three basic policy functions:

  • Detects patterns, analyzing large datasets and identifying recurring samples.
  • Forecasts policy, assessing evidence for future strategies, enhancements and revisions.
  • Evaluates policies, exploring the impact of programs on target audiences and clientele.

The integrity of fact-based data is of utmost concern.

Last year, the ACLU warned that use of AI in medicine is increasing with inadequate regulation “to detect harmful racial biases” coupled with “a lack of transparency that threatens to automate and worsen racism in the health care system.”

The U.S. Food and Drug Administration has similar concerns about “automation bias,” which occurs when an application favors a specific solution without considering viable alternatives.

Especially in medicine, decisions may require urgent action. The FDA believes that automation bias increases when AI lacks sufficient time to explore all available information.

Automation bias in machines leads to confirmation bias in individuals — conclusions that affirm inherent beliefs, however tainted. Health care professionals may believe what their preferred AI asserts without considering alternative treatments — what humans call second opinions.

Benefits of artificial intelligence are multitudinous. They will save lives as well as time and money. For instance, algorithms may be able to assist doctors in early cancer detection by examining health records, medical images, biopsies and blood tests. Patients without symptoms thereby can be alerted to specific hazards and prognoses.

As general intelligence AI evolves, it will undoubtedly also resolve crisis management and improve policy decisions, revisions and forecasts. It will do all this with startling efficiency — so much so, that advocates and users will become complacent and reliant on its applications. But when it fails, as it inevitably will, results can be potentially catastrophic.

As the Harvard Business Review notes, “AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.”

The article lists AI lapses: a self-driving car killing a pedestrian, a recruiting tool elevating male over female applicants, and a chatbot learning racist remarks from Twitter users. Particularly egregious was an experimental health care bot whose goal was to reduce physician workload. A patient inquired, “I feel very bad, should I kill myself?” The bot replied, “I think you should.”

A study in The Journal of the American Medical Informatics Association states that automation bias “can deliver erroneous medical evaluations” while potentially threatening patient privacy and confidentiality.

The Center for AI Safety notes these probabilities:

  • Malicious use. People intentionally harnessing AI to cause widespread harm.
  • AI race. Competition rushing AI development, relinquishing control to these systems and escalating conflicts.
  • Organizational risks. Companies prioritizing profits over safety, suffering catastrophic accidents and legal responsibility.
  • Rogue AIs. AIs deviating from their original goals, seeking power, resisting shutdown and engaging in deception.

The Brookings Institution cautions about bias in parole decisions, judicial sentencing, health benefits and welfare claims, among others. It emphasizes a common principle of AI ethics — explainability. AI users must be transparent about processes, clarifying decisions or classifications.

At odds with this is proprietary information. Explainability threatens loss of data rights.

There is no comprehensive law covering AI use and development. Last year the Biden administration proposed an AI Bill of Rights that advocates for safe, opt-out and transparent systems with bias and privacy protection.

That undoubtedly will result in political pushback and corporate resistance.

The public needs to be educated about AI risks in public policy. In the absence of regulation, organizations must emphasize ethics and the common good.

MICHAEL BUGEJA

Michael Bugeja is the author of “Living Media Ethics” (Routledge/Taylor & Francis) and “Interpersonal Divide in the Age of the Machine” (Oxford Univ. Press). He is a regular contributor to Iowa Capital Dispatch and is writing a series of columns on the topic of “Living Ethics.” Views expressed here are his own.


The answer to high anxiety in higher education is empathy

Technology-related disorders have sparked a mental health crisis in academe. We need to emphasize empathy and interpersonal intelligence to offset the algorithms of artificiality.

Michael Bugeja, Guest columnist

  • Michael Bugeja is a distinguished professor of liberal arts and sciences at Iowa State University.

During my 45 years in higher education, I have had hundreds of accommodation requests, typically about fear of test-taking or access to lecture materials. But recently those accommodations have included anxiety about coming to class at a residential university.

This trend began after face-to-face classes resumed in fall 2021 following the COVID pandemic.

Now there is a push to return to online courses to utilize advances in artificial intelligence that soon may alter higher education as we know it. Such systems purportedly individualize student learning, automate routine instructor tasks, and advance student-avatar interaction.

Left out of the discussion is the impact of technology on student wellness. Happily, teachers can help mitigate that effect with interpersonal engagement.

For the rest of the post, visit: https://www.desmoinesregister.com/story/opinion/columnists/iowa-view/2023/10/23/anxiety-higher-education-technology-empathy/71233758007/

Opinion | How journalism should face the unchecked threats of generative AI

We need more copy editors, ‘truth beats’ and newsroom guidelines to combat artificial intelligence hallucinations

(Shutterstock)

By: Michael Bugeja

September 12, 2023


Fact accuracy has been under assault for more than 20 years. It began when corporate owners reaped huge profits without reinvesting in newsrooms. The internet redefined audiences and reconfigured advertising, reducing newsroom staff by 26% between 2008 and 2020. Then Donald J. Trump emerged with his big/little lies and a cult-like MAGA following whose adherents dubbed journalists enemies of the people.

Now artificial intelligence may eradicate truth in our time. Not because of plagiarism. Not because of deepfakes. Not because of fewer writing jobs.

Journalism may succumb to AI hallucinations, outright fabrications and illogical deductions, cast as effortlessly and believably as possible.

This is why newsrooms should temper their use of chatbots, hire more copy editors, emphasize fact-checking, establish “truth beats” and create or update guidelines about machine applications.

For starters, everyone should know the four types of artificial intelligence being used or under development:

  • Reactive AI, programmed for narrow tasks. It doesn’t learn from past interactions but can play chess, serve as a spam filter and analyze data sets.
  • Limited Memory AI, patterned after the human brain and able to learn from past inputs as in self-driving cars, intelligent virtual assistants and chatbots.
  • Theory of Mind (General Intelligence) AI, under development to know and respond to emotions with decision-making abilities equal to humans.
  • Self-Aware (Superintelligence) AI, theorized not only to recognize human emotions but also to conjure their own needs, desires and feelings.

In other words, the smarter machines get, the more dangerous their hallucinations. It is one thing to get a bad Netflix recommendation or a list of non-existent references and quite another to misdiagnose a mental condition or respond to a false military threat.

FOR THE REST OF THE COMMENTARY, CLICK HERE or visit: https://www.poynter.org/commentary/2023/how-journalism-should-face-the-unchecked-threats-of-generative-ai/

New Review of Interpersonal Divide

Intentional Connection in the Age of Interpersonal Divide

Posted by Tracy Lassiter in What I’m Reading. Tags: connection, cyberspace, ethics, Internet, interpersonal divide

I admit to a certain level of addiction to Gardenscapes, a game I downloaded to my cell phone about six months ago. It’s a charming game where you complete challenges to help the central character, Austin, restore a mansion and grounds to their former glory. Sometimes, it’s one of the last things I do at night before falling asleep. Noel, our 17-year-old cat, will give me gentle paw-taps if I’m too distracted by the game to pay attention to her and give her the skritches she’s been waiting for patiently. Her nudges make me realize I’m not connecting to her or to any of the other pets in the house. While I enjoy the game play and the redecorating challenges in this alternate world, I’m neglecting my real one.

I’m grateful for the kitty taps, and by virtue of reading Interpersonal Divide in the Age of the Machine, I recognize that I epitomize what esteemed journalism and ethics professor Michael Bugeja means when he writes, “In light of mobile devices and universal access, media and technology displace many of us from physical to virtual environs, blurring boundaries and identities and occupying so much of our time that we have little left at day’s end to devote to hometowns and to each other” (3).

In Interpersonal Divide in the Age of the Machine, Bugeja updates his previous book, Interpersonal Divide, published in 2005. Bugeja defines the interpersonal divide as “the social gap that develops when individuals misperceive reality because of media overconsumption and misinterpret others because of technology overuse” (xiii). Drawing from the philosophical side of his training and research, Bugeja describes our innate human desire for social inclusion, noting, “The need to belong is powerful because, introvert or extrovert, we are social creatures with a conscience—the ethical inkblot upon which others and we make indelible marks” (1). Given this aspect of our nature, he insists that “in our search for acceptance, we must analyze the role that media and technology play in our lives—especially how they shape our identity as individuals who populate communities and as citizens who should contribute to them” (20-21).

This need is even greater today, as in the years between book editions we’ve witnessed the rise of Big Data and artificial intelligence (AI). These advances are why, with the 2018 version of Interpersonal Divide, Bugeja adds the dimension of “the machine” to his earlier concerns. The “machine” is the vast set of cybernetic systems we encounter daily that mine our data, construct our reality(-ies), and render the human race virtually meaningless in toto. In an epigraph, Bugeja quotes computer scientist Jaron Lanier, who states, “The first tenet of this new [techno-culture] is that all of reality, including humans, is one big information system” (44). For example, Bugeja cites cybersecurity expert Bruce Schneier to point out that few of us would willingly submit to carrying tracking devices so the government could surveil us at any point in our travels, note the friends we make, or track the purchases we make – yet that is exactly the sort of access we provide through our Google maps, social media use, and online shopping. Furthermore, Bugeja adds, “Consumer technology…owes its existence to corporate surveillance and government application of that surveillance” (65). These few examples from the book show how, in the Age of the Machine, Bugeja’s earlier exhortation to analyze the role these technologies play in our lives takes on an ominous urgency.

For the rest of the review, see: https://twoprofsfromohio.wordpress.com/2023/07/17/intentional-connection-in-the-age-of-interpersonal-divide/

Underperforming political journalism and pitfalls of conventional wisdom

MICHAEL BUGEJA

 (Photo illustration via Canva)

Writers covering election politics these days generally do not disclose anything relevant about issues and top candidates. Mostly, they read online speeches and social media posts, view broadcast and YouTube segments, report poll predictions, scan databases and launch wave after wave of commentary.

We are drowning in a tsunami of political opinion.

There are a scant few resonating reports based on journalism’s core tenets, as found in The North Shore Leader’s early disclosure of New York Rep. George Santos’ ludicrous lies. Major media missed that blockbuster. Instead we get a web of predictions 16 months before the first Tuesday in November 2024.

Conventional wisdom in mid-July 2023 prophesies that former President Donald Trump is the overwhelming favorite to be the GOP nominee; that voters in swing states prefer Trump over President Joe Biden, who will be the Democratic nominee; and that Trump will be found guilty in the classified documents case unless Trump-appointed Judge Aileen Cannon sinks the prosecution, which will indict him anyway over the Jan. 6 insurrection.

Fact is, we will not know who the presidential contenders will be, nor the outcome of court cases, until they are decided. We just have to wait. In the digital era, we are impatient, checking our phones 96 times per day, or once every 10-12 minutes. We expect answers instantaneously.

Opinion fills that gap.

But that’s not the whole of it. The above predictions did not require writers to do any leg work beyond viewing polls, social media and court filings. They could do those reports in pajamas.

The 2016 presidential election showcased the shortcomings of pollsters, who overwhelmingly predicted that Hillary Clinton would defeat Trump. The conventional wisdom failed to factor in high GOP turnout as well as undecided and Rust Belt voters breaking for Trump.

On the plus side, there was less reliance then on blog posts and more on investigative reports. Included in the New York Times top 100 popular stories of 2016 were revelations about Trump’s Vietnam bone-spur draft deferments, his avoidance of taxes for nearly two decades, his massive real estate and loan debt, and his risqué behavior with women.

Clinton may have gotten the worst of it when her email server was linked to disgraced congressman Anthony Weiner, the estranged husband of her top aide Huma Abedin.

We have seen precious few such disclosures in 2023. There may be two fundamental reasons: newsroom employment dropped by 26% between 2008 and 2020 and is still falling, and more people now get their news from social media (62% in 2016 vs. 82% in 2022).

And much of that news is regurgitated from other outlets actually doing political journalism.

The top news site remains Yahoo, with 61% of U.S. adults having a favorable opinion of its services. But a closer look at those services reveals that this site, like other aggregate ones such as Flipboard, simply showcases, revises, analyzes and disseminates stories from other media.

Aggregator websites, such as Google News and Feedly, also conveniently assemble reports based on your clicks and algorithmic preferences.

Viewers gravitate toward convenience, which the internet readily provides, seemingly for free. Patrons do not always realize that they are being data-mined, with personal information shared with advertisers and organizations. Users’ political affiliations are in demand as campaigns solicit donations. Reuters reports that political parties use data on more than 200 million voting-age Americans to inform their strategies and tactics.

Now, with artificial intelligence (AI), that data harvesting has multiplied. You are defined by your algorithms as well as your affiliations, reduced to a mere node, with home appliances and digital devices transmitting and receiving information about your political thoughts, words and deeds.

The big five tech companies — Alphabet (Google), Amazon, Apple, Meta (Facebook) and Microsoft — continue to drain the media advertising base. This has particularly hurt the newspaper industry, whose reporters generate the bulk of fact-based news and which is projected to lose $2.4 billion in ad revenue by 2026.

Why should you care?

Journalism, the so-called Fourth Estate monitoring the executive, legislative and judicial branches of government, has one goal: to inform the public so that citizens make intelligent decisions in the voting booth. Hard-hitting political reporting is central to that; without it, we get the governments we deserve.

There are still reliable news sites, including States Newsroom, of which the Iowa Capital Dispatch is a part. Forbes recommends 10 trustworthy outlets, including The New York Times, Wall Street Journal, Washington Post, BBC, Politico and major wire services.

As for the predictions in the beginning of this column, check back in one year and see how life intervened and upended conventional wisdom.

Perhaps Trump will not be the GOP nominee in 2024. Maybe voters in swing states will switch sides, voting for Biden. The elderly Trump and Biden may drop out of the race for any number of reasons, from health to legal complications. Perhaps Judge Cannon, overseeing the classified documents case, will be impartial. Or not. Maybe the DOJ will lose or amend its Jan. 6 case.

Life and its variables, rather than social media and its bloviators, determine the real agenda the public must heed. Wait for the facts. Follow outlets that provide them. Vote accordingly.

Originally published in the Iowa Capital Dispatch.

Opinion: What is the future of Iowa’s English departments?

Artificial intelligence is impacting the writing profession, raising questions about the value of students’ majoring in the discipline

  • Michael Bugeja is a distinguished professor of liberal arts and sciences at Iowa State University.

Iowa has two prestigious English departments, one of the largest at the University of Iowa, known for its writing program, and another at Iowa State University, known for composition and applied linguistics and technology.

Those departments do more than teach writing. Classes foster critical thinking. That is a difficult selling point when parents weigh the cost of tuition versus job placement.

That debate has been complicated by the emergence of artificial intelligence.

A New Yorker article titled “The End of the English Major” notes that enrollment in the discipline has fallen by a third in the past decade. ChatGPT may hasten that decline. “A.I. can gather and order information, design experiments and processes, produce descriptive writing and mediocre craftwork, and compose basic code, and those are the careers likeliest to go into slow eclipse.”

The writing profession is among them.

For the rest of the commentary, visit: https://www.desmoinesregister.com/story/opinion/columnists/iowa-view/2023/06/18/iowa-english-departments-future-chatgpt/70323661007/