AI, what’s there to worry about? Everything!

What’s all the hullabaloo about AI? Plenty! Will Smith, you may remember him as the actor who slapped Chris Rock at the 2022 Oscars, played the lead detective in the 2004 film I, Robot.

I, Robot is a science fiction movie in which the robots have gone offline and been reconfigured, changing from obedient caretakers and subservient, docile machines into narcissistic masters who want to rule the earth. They had a flaw in their programming.

Conspiracy thinking isn’t new.

The plot is an interesting one: Will Smith’s character doesn’t swallow the Kool-Aid attitude that robots won’t harm humans. He is labeled a conspiracy theorist for his beliefs, and the robots attempt to kill him.

The film is action-packed, and in some scenes the robots that have not been rewired to kill humans take on the role of actuaries. They offer Will Smith the calculated likelihood of an event, such as how likely the heroine is to survive an attack. He rejects their calculations: the heroine’s life is at stake, and he is not dissuaded by the numbers because he has empathy for the woman and a personal interest in her survival. Confused? Don’t be.

Human actuaries, like the robots in I, Robot, have been calculating when we’re going to die for as long as life insurance companies have been selling policies. As an aside, insurance companies started selling life insurance (formerly known as death insurance, until they had a public relations awakening) in the mid-1870s, and it became popular in the United States in the 1970s, when baby boomers were entering the labor market on a larger scale.

In the pre-IBM-computer days, actuaries used their calculators, which took them a while, and they really didn’t have a good database to draw from in making their guesses as to when we might die. Then calculators became electrified and evolved into the computer.

With a computer, the actuary could enter more data points. Then the computers got quicker. Within a nanosecond (for the geeks among us, a nanosecond is one billionth of a second), computers can calculate the impact of our age, gender, and race and predict when they can expect us to die! Soon, if we let them, the insurance companies will include our genealogical heritage to put a finer point on our life span.

Still contemplating sending in your DNA to an ancestry site? If you do, please consider keeping your data anonymous. 

Another film some of you may be familiar with is 2001: A Space Odyssey. Directed by Stanley Kubrick, who won the Academy Award for the film’s special effects, 2001 tells the story of a space mission to Jupiter.

2001: A Space Odyssey 

On the flight to Jupiter, with some of the astronauts in suspended animation and others monitoring the mission, the onboard computer, HAL 9000, attempts to sabotage the trip by killing off the astronauts, the result of a programming error made by the humans who designed the machine. Spoiler alert: HAL doesn’t succeed.

What does all of this have to do with AI? Should we be concerned about where AI is headed? Did Hollywood get it right, or is it waving the conspiracy flag prematurely? What’s your thinking?

First, what is meant by AI? As humans, we have a human perspective…duh! But we are fallible. We may attempt to solve our problems in a rational, unbiased, and linear or straightforward way, but in actuality we struggle with our conscious and unconscious biases. It’s like walking around with blinders on while believing we have X-ray vision.

Although we may believe we are being rational, we don’t have full control over our unconscious, so we are limited in knowing how, or whether, our conclusions would be any different if we had a different life experience.

That’s why, when looking at the same set of circumstances or events, two otherwise similarly experienced people will come up with different conclusions. Even identical twins, whose DNA is nearly identical, will differ. And by the way, that’s a good thing. Our life experiences make us who we are and influence our decisions for better and for worse.

What does that have to do with AI? Well, AI in its simplest form is a machine, a computer, being given input from multiple sources (all human-generated, so the human is always part of the equation).

Where did AI get its name?

IBM (not the most independent of sources) defines AI, in part, as a combination of computer science and data used to generate answers to a specific problem. After a few trial runs and some reprogramming, the machine’s program can develop what’s known as “machine learning” or “deep learning,” which falls under the heading of artificial intelligence. We know the machine is not intelligent, but the process is analogous to human intelligence, so that’s how it gets its name.

The human brain is the most complex of our organs. Our experiences both benefit and limit our output. The computer is a wonderful machine, but it too is limited and enhanced by the data points it’s been given (you’ve probably heard the expression “garbage in, garbage out”), which means that if we feed a machine false or partial data, its problem solving, like our own brain’s conclusions, can result in the wrong answer. Still, machines can surpass humans because they can handle so much data so quickly, and when they are designed to put together complex systems from many data points, they can come up with answers fast. Accuracy, however, just as with humans, may be wanting.

But what machines can do easily, with the right programming, is carry out algorithms. An algorithm is a combination of routine steps. Humans have algorithms, just as machines can be programmed to follow them. An example of a human algorithm is tying your shoes. A machine algorithm might take a playlist of one hundred songs and pick out all those with the same genre or style.
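To make that playlist example concrete, here is a minimal sketch in Python; the playlist, the song titles, and the find_by_genre function are all hypothetical, invented purely for illustration.

```python
# A toy "machine algorithm": given a playlist, keep every song
# whose genre matches the one we ask for.

playlist = [
    ("Hound Dog", "rock and roll"),
    ("Take Five", "jazz"),
    ("Respect", "soul"),
    ("So What", "jazz"),
    ("Johnny B. Goode", "rock and roll"),
]

def find_by_genre(songs, genre):
    """Return the titles of all songs whose genre matches the request."""
    return [title for title, song_genre in songs if song_genre == genre]

print(find_by_genre(playlist, "jazz"))  # prints ['Take Five', 'So What']
```

The machine simply repeats the same routine step, compare the genre and keep or discard, for every song on the list; that repetition of simple steps is all an algorithm is.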

Let me share my own prejudice here. I don’t object to machine algorithms and machine learning, but in the absence of monitoring by humans, the machine is subject to error, and then we as humans suffer the consequences.

A Forbes article found that AI algorithms, written by humans but then given free rein in making financial loan decisions, had “an inherent bias” that was a contributing factor in slowing minorities’ home loan approvals. An investigation by The Markup found that lenders were more likely to deny home loans to people of color than to white people with similar financial characteristics: Black applicants were 80% more likely to be rejected, Latino applicants 40% more likely, and Native American applicants 70% more likely to be denied.

What is a good resource for AI?

Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, is an excellent resource if you want more information. The text addresses financial, psychological, moral, and educational issues. It doesn’t offer a way around the dilemma, and the time to put the genie back in the bottle has passed.

The internet is a breeding ground for entertainment, information, and, unfortunately, misinformation. A search will provide you with links for your every question, but the links don’t come with any type of credibility or authenticity rating.

Can you believe a five-star ranking?

Restaurants, experts, and consumer products are given rankings, but buyer beware: you cannot take a five-star ranking as reliable. I believe we will soon be inundated with a host of fakes, visual look-alikes, articles, and statistics, that will be taken at face value but later proven false. Fakes and scams have been around forever; the use of AI, however, has made getting at the truth more challenging. Before you decide to believe something you read or see, become a researcher. Go on your search anonymously, enter your query into different search engines, and cross-check what you find against two or more sources.

AI has moved into the world of work and the hiring of new employees. Rani Molla, a senior correspondent at Vox, wrote in a recent blog post: “The use of AI is already commonplace in so-called applicant tracking software, which most major companies use in their hiring process. This widespread technology allows companies to use keywords or criteria — like whether or not they have a college degree or a gap in their resume — to automatically winnow down the mass of incoming online applications. But many, including employers themselves, fear that those broad strokes could end up excluding people who would be perfectly good candidates.”
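To picture how that kind of automated winnowing works, here is a minimal sketch in Python; the applicants, the resume text, the required keywords, and the screen_applicants function are hypothetical, not drawn from any actual applicant tracking product.

```python
# A toy applicant-screening filter: keep only applicants whose resume text
# mentions every required keyword. Real applicant tracking systems are far
# more elaborate, but the broad-strokes idea is the same.

applications = [
    {"name": "A. Cohen", "resume": "BA in accounting, five years of auditing"},
    {"name": "B. Levy", "resume": "Self-taught bookkeeper, ten years of auditing"},
]

required_keywords = ["accounting", "auditing"]

def screen_applicants(apps, keywords):
    """Return the names of applicants whose resume contains every keyword."""
    return [
        app["name"]
        for app in apps
        if all(word in app["resume"].lower() for word in keywords)
    ]

print(screen_applicants(applications, required_keywords))  # prints ['A. Cohen']
```

Notice that B. Levy, a perfectly plausible candidate, is dropped simply because the word “accounting” never appears on the resume; that is exactly the broad-strokes exclusion employers worry about.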

A Pew survey quoted by Molla found that using AI to monitor employees’ movements and facial expressions, and to track them while they work, has become more common in the workplace, but the research doesn’t show that the technology works all that well, and it has had the negative side effects of worker demoralization and lower productivity.

Have a story about AI you’d like to share? Please send it my way.

Thanks for reading the column. Please go to the AI website (psychology@americanisraelite.com) and post a comment. 

Questions? Suggestions? Send me an email at psychology@americanisraelite.com. Be well. Stay safe. See you here next month. 

For more from Dr. Manges’ Psychologically Speaking, click here.