Technology, Cheating and Loss of Trust

The Houston Astros used technology to help win the 2017 World Series. Students cheat using cell phones, wireless earbuds, spyglasses and smart watches. The speed and stealth of technology are too tempting to resist. But the desire to win at all costs has its downside, too.

The Houston Astros baseball team may have won the 2017 World Series with a little help from technology, but the sign-stealing scandal had deeper repercussions than the penalties imposed by the Commissioner’s Office.

The penalties were the harshest possible under current rules: Houston Astros General Manager Jeff Luhnow and Manager A.J. Hinch were fired, and the team fined $5 million with loss of first- and second-round draft picks in 2020 and 2021.

The scheme involved a center-field camera fixed on the opposing team’s catcher as he relayed signs to the pitcher: fastball, changeup, curve and so on. The video was relayed to a monitor in a hidden space in the dugout. Once the sign was decoded, a trash can was banged to signal what the next pitch would be.

This had to be done within a second or two, but the speed of technology allowed it.

Once discovered, the scandal cast doubt on every game the Astros won with their cheating system. That was unfortunate, too, because the team was immensely talented and probably would have won the series without cheating.

As the New York Times noted, the effort was hardly necessary: in 2017, “Houston hit .279 at home with 115 home runs and a .472 slugging average. On the road, where elaborate sign-stealing should theoretically have been more difficult, the Astros hit .284 with 123 home runs and a .483 slugging average.”

But the real damage was to the sport and, more specifically, to the business of baseball. According to another Times report, “The business of baseball depends on the public’s belief in the legitimacy of the competition. That is the implicit deal between the league and fans, and without that trust, everything falls apart.”

That’s the ethical lesson, too. Cheating obliterates trust. Often, it isn’t needed except to ensure a winning season … or semester.

Last year Forbes published an article titled “How Technology Is Being Used By Students To Cheat On Tests,” describing how students use wireless earbuds connected to smartphones in backpacks with pre-recorded content related to exams. Other tech-related cheating involved Google glasses with pre-programmed answers and even smartwatches connecting to third-party off-site accomplices transmitting answers.

Temptation is part of the human experience. However, technology has spawned new strategies that some people just cannot resist because of ambition, greed or monetary reward.

When dealing with temptation, ethical people consider consequences, which often are greater than cheaters initially anticipate. They ponder the worst-case scenario and whether they are able to pay that price.

The price usually involves something worse than loss of a job or promotion, or a failing grade on a test or course; it can result in loss of trust, triggering irreparable harm to a person’s career or future.

Interpersonal Divide in the Age of the Machine focuses on how omnipresent technology undermines personal and professional values at home, school and work.

Deadly Censorship: China and Coronavirus

Whistleblower physician Li Wenliang, who warned the world about the deadly coronavirus and was punished by police for spreading rumors, contracted the disease and died in Wuhan Central Hospital. He was hailed as a hero on the microblogging site Weibo, which carried the hashtag #IWantFreedomOfSpeech (now banned). His case shows the dangers of a world without journalism.

In the wake of his death, The Guardian reported “outrage and frustration felt across China over the initial cover-up of the deadly virus.” Some 1.5 billion Weibo users alone expressed their anger and grief over how Dr. Li had been treated.

According to the Guardian, Li was one of eight people detained for spreading rumors about the dangerous disease, with “the fates of the other seven, also believed to be medical professionals,” still unknown.

Government censorship not only silences truth but also often counters with propaganda and misinformation to minimize the impact on policy and national image. An example occurred with the 1986 meltdown of the Chernobyl Nuclear Power Plant in then-Soviet Ukraine, which threatened all of Europe. To this day, the death toll from the meltdown has yet to be disclosed but has been estimated at between 4,000 and 27,000 people.

The New York Times has reported that China had 20,438 confirmed cases of the disease as of early February. At the same point in the SARS outbreak, it had 5,327 cases.

A pandemic risks the lives of thousands.

Conversely, a free press saves lives. Censorship kills, as history has shown us from Chernobyl to coronavirus. Worse, in the absence of journalism, social media spreads misinformation that scientists have difficulty addressing or correcting. That has led to the term “infodemic,” prompting the World Health Organization to work with tech companies to minimize falsehoods about the coronavirus and other diseases.

‘App-ocalypse’ in the Iowa caucus

Did party officials forget about Murphy’s Law?

Ericka Petersen, a supporter of Sen. Bernie Sanders, brought her two children, ages 3 and 1, to the Iowa satellite caucus in Washington, D.C. (Photo by Robin Bravender, States Newsroom.)

Technology changes everything it touches, without itself being changed much at all. Introduce it into the economy, and the economy is all about technology. Introduce it into education, and education is about the technology. Introduce it into elections, and you have the Iowa Democratic Caucus.

The value of the caucus is multi-fold, and many in media fail to appreciate the community and communal aspects of it. You meet with neighbors in your district. You get a card with a front and back ballot. You name your first choice on the front ballot, and if that candidate garners a set minimum of votes to be viable, you’re done. But you also get a second chance if your candidate is declared not viable because too few people supported them. You can vote for another favorite.

You can’t do that in a voting booth.

Thereafter, though, the process becomes complicated. Very complicated.

A phone app was going to make that all so simple. Uh-huh.

Instead, party officials listened to technology advocates who sell apps by touting Moore’s Law, with speed and capabilities doubling every few years. We know another law, Murphy’s Law: If something can go wrong, it will. Anytime you use technology, you need a Plan B. Anyone who uses technology — from PowerPoint presentations to Skype conferences — has a Plan B. The Iowa Democratic Party didn’t have one.

That’s what happened.

The New York Times was all over this phenomenon, reporting that the app was created by Shadow Inc., a for-profit company. The Times cited Georgetown computer science professor Matt Blaze who stated the obvious: Apps rely on dependable digital networks and operating smartphones to run properly. “The consensus of all experts who have been thinking about this is unequivocal. Internet and mobile voting should not be used at this time in civil elections.”

The app-ocalypse in Iowa might very well bring an end to the state’s caucus and its first-in-the-nation status. Political pundits often criticize the state, with its roughly 3 million residents and 94% white population, as non-representative of the nation’s identity. But Iowa’s caucus does offer something of value: It affords candidates a chance to visit with just about everyone of all social classes and interact with us in everyday environments in the year or more leading up to the vote.

All that is in jeopardy, and not only because of the app.

As of this writing, it has been 12 hours since the caucusing ended, and the media are taking no prisoners. Here’s a sampling of news stories:

Really?

We’re all living in accelerated digital time. Technology does that. We want what we want when we want it: on demand. We want data on demand. We want to know. Who won, who lost, what’s the meaning of all this? TV talking heads were poised to answer all of that.

One humorous aspect of the no-result Iowa caucus was how irritated media organizations became, with their pricey pundits in downsized newsrooms having absolutely nothing to talk about. It was mildly enjoyable seeing CNN’s Wolf Blitzer grow apoplectic as the evening progressed, trying to rally panels of experts to say something, anything, other than “we’re waiting for results.”

Presidential candidates had planes to catch and wanted to flee Iowa even though the weather, at least for Iowans, was a balmy 30+ degrees on caucus night.

In the end, we will know who won the caucus. The results will be accurate because — and this is important, everyone, so please listen up — we’re not talking “hanging chads.” There are voting cards with our names on them and precinct captains have those cards in their possession.

In the meantime, the media circus will have moved on to New Hampshire. The Iowa results will have less of an impact because state party officials relied on technology instead of common sense.

They used an app called Shadow, and it cast a shadow on the future of our caucus.

Technology, Ethics and Kobe Bryant’s Death

Should a news organization have broadcast the basketball star’s death before family were officially informed?

Should Bryant’s sexual assault charge have been mentioned in initial accounts of his passing in a crash that also killed his 13-year-old daughter and seven others?

These are legitimate questions concerning the death of basketball great Kobe Bryant, 41, who perished with his daughter and seven others in a helicopter crash en route to a sports event.

It is standard media practice to wait until officials notify family members of a loved one’s passing. The Atlanta Journal-Constitution reports that this did not happen, publishing a screenshot of the tweet above by a sheriff’s deputy.

Internet, social media and satellite broadcasting have changed standard practice at some but not all news agencies, especially when the death is sudden and concerns a celebrity.

Kobe Bryant was one of basketball’s greatest athletes. The pressure to report was intense. Nonetheless, doing so before family members were informed is ethically suspect.

Scoops were important in the age of legacy media, especially print, when competitors might take hours or even a day or more to match a story. It meant that your outlet had reporters in the field or at the site of spot news. The audience could rely on the outlet’s being first and informing you before others in the spirit of the public’s right to know.

In the digital age, being first to report has a different advantage. It keeps viewers on your channel or website for the inevitable flood of updates and analyses about breaking news.

In this type of environment, news outlets again are dealing with the acceleration of time, an illusion of technology. Everything must be immediate.

It is perfectly ethical for an outlet to wait until authorities notify relatives. That remains the standard. Consider the impact in this case on Bryant’s family, perhaps hearing about their relatives’ deaths from Facebook, Twitter, email, text, video messaging and phone calls.

It must have been harrowing.

It is also true that accelerated time affects what news commentators say about celebrities, even upon first learning about their passing.

CNN’s sports analyst Christine Brennan gave a retrospective five-minute analysis about Bryant soon after his death was reported by TMZ and mainstream media. Within that time frame, Brennan did briefly mention the 2003 sexual assault case, stating: “And, of course, there are issues, while it seems difficult to mention at the moment of his death that we’re talking about the sexual assault allegations, the trial — that was a terrible moment, and that was not good, obviously. I’m not going to sugar-coat that at all.”

Brennan was referencing a charge that Bryant raped a 19-year-old hotel employee. Charges were dropped when the woman did not testify against him. A civil suit was filed and settled out of court.

The CNN reference to the case was made in a report that still fell under the category of spot news. Viewers still learning about his death anticipated a different analysis.

Nevertheless, this wasn’t the first time that the case was mentioned in recent years. Upon Bryant’s 2016 retirement, in the midst of celebrating his sports legacy, The Daily Beast published a full account.

From an ethics perspective, the assault should be mentioned in Bryant’s obituary. It was a major national story.

But again, technology accelerated time. After reporting his death, online news went right into obit mode. In the past there would have been a spot news report about the crash and perhaps the next day, an obituary with the rape case mentioned therein along with other aspects of Bryant’s life.

Mentioning the case while reporting spot news, even before or shortly after his family had heard of his passing, angered some viewers trying to absorb the tragedy that also claimed the life of his daughter, Gianna Maria Onore.

Categories of news have their place, even in the digital era. When spot news combines with obituary in a digital milieu rife with omnipresent commentary by analysts and talking heads, questions are sure to arise.

This will happen again because technology changes everything it touches, including media ethics. It accelerates time. Everything is immediate. Sometimes truth comes off as untimely, at least in the moment.

A New Holiday Tradition: Hide the Router

Lacking cheer in the New Year? There may be a reason, from incessant robo calls to smart phones at the holiday table. This “Iowa View” post in The Des Moines Register advises what you can do about it.

Technology has changed holiday traditions. More of us shop online rather than on Main Street marveling at street decorations and festive window displays. We can avoid the post office, too, with companies like Amazon providing shipping and gift-wrapping. Presents require internet rather than assembly. We can text a digital card with a meme or emoticon rather than sit at the kitchen table composing missives for nieces, nephews, children and grandchildren.

Maybe that’s for the better, since many digital natives have trouble reading cursive.

Despite these conveniences, we seem less cheerful, peeved by pings, clicks and beeps of our era. …

For the rest of the post, visit The Des Moines Register.

Peeping Trolls: We Need New Laws to Stop Invasive Hacking

Many states have “Peeping Tom” laws for anyone who views, photographs or films another person without consent in a place where the victim has a reasonable expectation of privacy. But as yet, no such law exists for hackers who invade a person’s home through camera security systems.

In recent weeks, hackers have broken into Ring security cameras, terrorizing children, spying on women and spewing racist slurs.

 In one such incident, a Peeping Troll not only invaded a person’s home but also called a 15-year-old boy racist names.

Another incident involved a man making inappropriate comments to a woman. The perpetrator then set off the woman’s home alarm system.

Here’s a video of a hacker terrorizing an 8-year-old.

In response, Ring published a post that recommended enabling two-factor authentication and stronger passwords.
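Two-factor authentication of the kind Ring recommends pairs the password with a one-time code. As a rough illustration of how standard one-time codes are derived — this sketches the generic HOTP/TOTP algorithms of RFCs 4226 and 6238, not Ring’s particular implementation — consider:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32, counter, digits=6):
    """RFC 4226 one-time code from a base32 secret and a counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def totp(secret_b32, period=30):
    """RFC 6238: HOTP with the counter derived from the current time."""
    return hotp(secret_b32, int(time.time()) // period)
```

Because the code changes every 30 seconds and depends on a shared secret, a stolen password alone is no longer enough to log in — which is why two-factor authentication blunts the credential-stuffing attacks described below.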

In an article about the hacks, titled “We Tested Ring’s Security. It’s Awful,” Motherboard wrote:

Ring hackers’ software works by rapidly checking if an email address and password on the Ring web login portal works; hackers will typically use a list of already compromised combinations from other services. If someone makes too many incorrect requests to login, many online services will stop them temporarily from doing so, mark their IP address as suspicious, or present a captcha to check that the user trying to login is a human rather than an automated program. Ring appears to have minimal protections in place for this though.
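The protections the excerpt says Ring lacked — temporarily blocking an address after repeated failed logins — can be sketched in a few lines. This is a minimal illustration, not Ring’s actual system; the threshold and window values are assumptions:

```python
import time
from collections import defaultdict

# A minimal login-throttling sketch: after MAX_FAILURES failed attempts
# within WINDOW seconds, further attempts from that address are rejected
# until the window expires. (Threshold and window values are assumptions.)
MAX_FAILURES = 5
WINDOW = 300  # seconds

_failures = defaultdict(list)  # ip address -> timestamps of recent failures

def allow_attempt(ip, now=None):
    """Return True if a login attempt from this address may proceed."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[ip] if now - t < WINDOW]
    _failures[ip] = recent  # prune expired failures
    return len(recent) < MAX_FAILURES

def record_failure(ip, now=None):
    """Record a failed login attempt from this address."""
    _failures[ip].append(time.time() if now is None else now)
```

Even this crude check defeats the rapid guessing the hackers’ software relies on; real services layer on captchas and IP reputation as well.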

Breaking into home cameras is a clear violation of privacy, and if authorities are able to identify a hacker’s IP address, some laws may apply. But now is the time for stiffer penalties. There’s little difference between a Peeping Tom and a Peeping Troll. The latter is perhaps more sinister in that perpetrators do not have to reveal themselves and can invade privacy for as long and as luridly as they like while remaining silent.

They can stalk, spew hate and terrorize, too.

It’s time prosecutors and lawmakers do something about this new type of home invasion as hackers and their apps become more sophisticated.

Interpersonal Divide in the Age of the Machine has chapters on privacy invasion at home, school and work.

 

When “smart” becomes “snoop”: Your TV is watching you

New TV models recognize you and record what you do in your home, including bedrooms, feeding data to advertisers. Worse, hackers can access your devices, cyberstalk and blackmail you. They can even alert burglars to times you are apt to be away from your home. The FBI just put out a warning.

In 2005, the first edition of Interpersonal Divide stated: “We get the feeling on the other side of our computer that no one is looking back at us through windows, and yet, everybody could be.”

The second edition took that a step further, warning about privacy invasion from voice recognition speakers such as Amazon Echo (aka Alexa). Now, combined with embedded digital cameras, those speakers in televisions have eyes as well as ears.

And they’re stalking you.

So much so, in fact, that the Federal Bureau of Investigation has posted a warning about privacy invasion and cyberstalking. Here’s an excerpt:

Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router. Hackers can also take control of your unsecured TV. … In a worst-case scenario, they can turn on your bedroom TV’s camera and microphone and silently cyberstalk you.

Consider that last scenario. Bedrooms. Two-thirds of adults have televisions there and, worse, 71 percent of kids ages 8 to 18 do, too. Imagine what a bad actor can do, recording photos, videos and audio of what occurs in the most intimate spaces of our homes.

The FBI recommends that you take these steps to protect yourself and family members:

  • Know exactly what features your TV has and how to control those features. Do a basic Internet search with your model number and the words “microphone,” “camera,” and “privacy.”
  • Don’t depend on the default security settings. Change passwords if you can – and know how to turn off the microphones, cameras, and collection of personal information if possible. If you can’t turn them off, consider whether you are willing to take the risk of buying that model or using that service.
  • If you can’t turn off a camera but want to, a simple piece of black tape over the camera eye is a back-to-basics option.
  • Check the manufacturer’s ability to update your device with security patches. Can they do this? Have they done it in the past?
  • Check the privacy policy for the TV manufacturer and the streaming services you use. Confirm what data they collect, how they store that data, and what they do with it.

If you are the victim of cyber fraud, you should contact the FBI’s Internet Crime Complaint Center at http://www.IC3.gov or call your local FBI office.

Interpersonal Divide in the Age of the Machine has chapters about privacy invasion at home, school and work, with sections on cyberstalking, harassment and bullying. In addition to televisions, Interpersonal Divide warns about other everyday spying appliances, including dishwashers, refrigerators and even coffee machines.

Social Media’s Power Will Influence Impeachment

As televised hearings begin on the impeachment of President Donald Trump, pundits note the former reality television star knows the power of that medium. He also knows the power of Twitter. Other impeached presidents could control legacy media. In this case, not so much.

Richard Nixon had three networks; Bill Clinton, networks, cable and digital infancy; but Donald Trump has networks, cable, Twitter, Facebook, WhatsApp, WeChat, WordPress, Tumblr, Instagram, Zoom, Skype, LinkedIn and 65+ top global platforms to contend with, all sharing, commenting on and, in many cases, lying about the news.

Short of going into hiding with no digital access, everyone will hear a version of the truth, and not necessarily the factual one, including Sen. Lindsey Graham who says he won’t read transcripts or watch televised proceedings.

Too bad. He’ll get mostly tainted, if not fake, news.

Impeachment news and views will be everywhere, globally, 24/7. That means citizens who only watch Fox News or MSNBC will be informed by other sources via one platform or another. There is no escaping this.

But here’s the rub: Normally, social media outlets discuss a wide range of topics — the day’s news, celebrity gossip, sports and so on. But what happens when everyone is focused on the same topic: impeachment?

Now political affiliations come into play, and not in a factual way.

The one in five adults who follow Trump on Twitter will hear upsetting or affirming opinions from others migrating across platforms. Conservatives who support the president and rely on radio or cable news will be fending off liberal Democrats who are most apt to use social media to influence and rally others. Very liberal Democrats and very conservative Republicans will continue to spread their views on Facebook, which migrates to Instagram, so relatives and friends will be arguing through Thanksgiving and into the December and January holidays.

The edge in all these debates will go to viewers who watch the televised proceedings without media affiliation or mediation. Those who already made up their minds about whether the President should be impeached will be getting their information second-, third- and fourth-hand (or worse).

So, if you must rely on news sites, these are the most reliable, according to Forbes magazine:

1. The New York Times

2. The Wall Street Journal

3. The Washington Post

4. BBC

5. The Economist

6. The New Yorker

7. Wire Services: The Associated Press, Reuters, Bloomberg News

8. Foreign Affairs

9. The Atlantic

10. Politico

But you don’t need to subscribe or access their websites. In this momentous case, you can watch the proceedings and make up your own mind without all that social media noise.

In issues of this magnitude, it is important to get the facts and to act on them, not only at the dinner table but especially in the voting booth. Listen, analyze and decide without being tainted by the divisive views of social media and media sites that align with political parties.

For more on the power of social media, and how you can mitigate its effects, read Interpersonal Divide in the Age of the Machine.

Social Media Amplifies Stereotypes

The University of Missouri Athletic Department wanted to promote the NCAA’s diversity week but sparked dissent over how African Americans were depicted. Think before you tweet, or suffer a similar fate.

The intent of the tweet was proactive, celebrating diversity by promoting aspirations of athletes. It had the opposite effect.

Included in the photo above were track athlete Arielle Mack, depicted with the slogan “I am an African American woman,” and ticket office employee Chad Jones-Hicks, who appeared above the statement “I value equality.” The tagline for white gymnast Chelsey Christensen read “I am a future doctor”; the one for swimmer C.J. Kovac proclaimed, “I am a future corporate financer (sic).”

The misspelling of “financier” indicates a lack of fact-checking. Had someone analyzed each word of the post, perhaps the disparities could have been avoided. To be sure, Mack and Jones-Hicks have aspirations on par with those of Christensen and Kovac, but the emphasis instead was on race.

Anything on the internet can go viral, undermining intent and tainting an organization’s reputation. Clearly, Mizzou Athletics wanted to celebrate diversity and never meant the post to be demeaning.

According to the Washington Post, the tweet was based on a video containing this quote from Mack:  “I am an African American woman, a sister, a daughter, a volunteer and a future physical therapist.” The tagline, of course, should have been “future physical therapist.”

Perhaps one errant tagline could be forgiven; but in this case, there were three.

Sprinter Caulin Graves said, “I am a brother, uncle and best of all, I am a leader [emphasis added].” This is how Graves was depicted:

The Athletic Department apologized for the tweet with another tweet containing a video upon which the errant post was based:

The video, a professional product, has much to commend it. However, the stereotypical tweet undermined that effort.

Vincent Filak, who covered the issue on the Dynamics of Writing website, had these recommendations:

  • Scrutinize each word of any post to guard against stereotypes.
  • Ask for a second opinion if you are unsure whether you are disparaging anyone.
  • Run the content by a source included in the content for his or her opinion.
  • Talk to an expert who may have insight or advice on inclusion.

Filak adds, “Even if your newsroom, your PR firm or your ad agency doesn’t have a cornucopia of diversity, you can still avoid dumb mistakes by asking for help.”

Take time with social media posts. Think critically or risk being the target of criticism.

Robotic Hiring Systems and Discrimination

Companies using machine hiring systems might screen out potential employees in violation of federal law prohibiting bias based on race, disability, age and other factors. Humans must honor protected classes in interviews, while AI vendors protect proprietary algorithms.

In the above video, Wall Street Journal senior correspondent Jason Bellini covers the pros and cons of robotic hiring systems. He interviews Kevin Parker, CEO of HireVue, who says his platform is more objective than traditional interviews because it removes bias from the hiring process. However, Bellini also interviews Ifeoma Ajunwa, a legal scholar and labor law professor at Cornell University, who challenges that view.

First, some legal background:

Employers who interview job applicants must adhere to the tenets of Title VII of the Civil Rights Act of 1964 and related statutes, which forbid discrimination based on race, national origin, age, disability and other factors. Questions must be free of bias. For instance, an interviewer may not inquire about a candidate’s height, weight or marital status.

No doubt AI programmers have taken Title VII into account when phrasing interview questions, such as found in this tip sheet by the University of New England. But that’s not where algorithmic discrimination might occur.

That bias might be subtle, programmed into an algorithm adapted to the hiring company’s idea of an “ideal” job candidate. People might be excluded without anyone knowing whether the robot is measuring facial features for age, weight, symmetry, voice tone or other distinguishing human features. There is no real way of knowing without examining the proprietary program.

Dr. Ajunwa addresses this concern in an NPR interview:

So that’s where it gets more complicated – right? – because a job applicant could suspect that the reason they were refused a job was based on characteristics such as race or gender, and this is certainly prohibited by law. But the problem is how to prove this. So the law requires that you prove either intent to discriminate or you show a pattern of discrimination. Automated hiring platforms actually make it much harder to do either of those.

And a lot of times, the algorithms that are part of the hiring system, they are considered proprietary, meaning that they’re a trade secret. So you may not actually be able to be privy to exactly how the algorithms were programmed and also to exactly what attributes were considered. So that actually makes it quite difficult for a job applicant.

Benetech, a nonprofit whose mission is “to empower communities with software for social good,” is concerned about AI hiring systems discriminating against people with disabilities. The nonprofit discusses key findings of a 2018 study titled “Expanding Employment Success for People with Disabilities”:

  • Artificial intelligence tools are increasingly widespread and vendors of these products have little understanding of their negative impact on the employment of people with disabilities.
  • The level of data collection about all of the relevant issues remains rudimentary, limiting many opportunities for improvements.
  • It is clear that employers see people with disabilities primarily through a compliance lens, and not through a business opportunity frame.

As AI hiring systems become more popular with such companies as Goldman Sachs, Unilever and Vodafone, attorneys and legislators are investigating ways to ensure algorithms are compliant with federal law.

Illinois is among the first in the nation to take on robotic hiring programs in its “Artificial Intelligence Video Interview Act,” which requires transparency and consent for any company using these algorithms.

In a post about the new law, Bloomberg Law states:

Employers increasingly are using AI-powered platforms such as Gecko, Mya, AutoView, and HireVue to streamline their recruitment processes, control costs, and recruit the best workers. Providers claim their technologies analyze facial expressions, gestures, and word choice to evaluate qualities such as honesty, reliability, and professionalism.

But the technology is also controversial. Privacy advocates contend AI interview systems may inject algorithmic bias into recruitment processes, and that AI systems could generate unfounded conclusions about applicants based on race, ethnicity, gender, disability, and other factors.

Interpersonal Divide in the Age of the Machine contains chapters that address the inherent biases of algorithmic programming. Institutional racism, subliminally associated with an organization’s target audience or bottom line, may be encoded into sophisticated robotic systems.

For instance, the Washington Post reports that a popular algorithm that identifies patients who need extra medical care “dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine.”

When it comes to robotic HR systems, that’s the beginning of what awaits those the algorithm selects for employment. If technology is used to select a person for a job, one can anticipate that it will be used to monitor performance on that job.

Here’s an excerpt from Interpersonal Divide:

Machines not only monitor how employees are using devices and applications but also may be programmed to detect moods and behaviors of those employees. Machines monitor employees to an alarming degree in some companies, often under the pretext of improving performance. Stress is measured, too, although usually in a negative light. Examples include tracking a worker’s Internet and social media use; tapping their phones, emails and texts; measuring keystroke speed and accuracy; deploying video surveillance; and embedding chips in company badges to evaluate whereabouts, posture and voice tone.

Cyberlaw needs to catch up with federal labor law, especially when AI is used in hiring and firing decisions. As Bloomberg Law notes in its report, some labor law attorneys believe algorithmic systems could unintentionally screen out protected classes. One attorney cited in the above post suggests employers should test robotic systems against a pool of candidates for potential bias.
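One way to run the test that attorney suggests is the EEOC’s “four-fifths rule”: if a protected group’s selection rate falls below 80 percent of the highest group’s rate, the system may be producing adverse impact. A minimal sketch in Python follows; the group names and numbers are hypothetical, not from any real audit:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit of an algorithm's hiring decisions by group.
results = {"group_a": (48, 100), "group_b": (30, 100)}
flags = adverse_impact(results)  # group_b's ratio: 0.30 / 0.48 = 0.625
```

A check like this requires only the algorithm’s decisions by group, not its internals, which is precisely why employment lawyers favor outcome testing when the code itself is a trade secret.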