Artificial Intelligence Biased Against People with Mental Illness

May 29, 2022

Scott Monteith, MD

The increasing use of a technology called emotion artificial intelligence could lead to discrimination against people with mental illnesses, particularly in employment, a report led by a College of Human Medicine professor warned.

Businesses and government agencies are rapidly expanding the use of emotion artificial intelligence (AI), including in screening job applicants and monitoring workers, with little evidence that it is scientifically valid, the article published in the journal Current Psychiatry Reports said.

“There are concerns that commercial use of emotion AI will increase stigma and discrimination and have negative consequences in daily life for people with mental illness,” the report said, adding that “commercial emotion AI algorithm predictions about mental illness should not be treated as medical fact.”

The algorithms measure facial features, body language, speech patterns, heart rate, respiration, eye tracking, sweating, and other data, including social media postings, to develop a conclusion about a person’s emotional state. The programs also could discriminate based on race and gender, since researchers suspect the algorithms are largely based on characteristics of white males.
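
None of the commercial systems' code is public, but a minimal sketch can illustrate the kind of signal fusion the article describes. Everything below, including the feature names, weights, thresholds, and emotion labels, is hypothetical and invented for illustration; it is not any vendor's actual algorithm.

```python
from dataclasses import dataclass

# Hypothetical feature set combining the signal types named in the article.
@dataclass
class Signals:
    facial_tension: float   # 0.0-1.0, from facial analysis
    speech_rate: float      # words per second
    heart_rate: float       # beats per minute
    respiration: float      # breaths per minute

def predict_emotion(s: Signals) -> str:
    """Toy rule-based fusion; real systems use opaque, proprietary models."""
    # Arbitrary weights and thresholds chosen only for illustration.
    arousal = (0.5 * s.facial_tension
               + 0.3 * (s.heart_rate / 100)
               + 0.2 * (s.respiration / 20))
    if arousal > 0.8:
        return "fear" if s.speech_rate > 3.0 else "anger"
    if arousal < 0.3:
        return "sadness"
    return "happiness"

print(predict_emotion(Signals(facial_tension=0.9, speech_rate=3.5,
                              heart_rate=110, respiration=22)))
```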

“We know that AI has embedded in it the same biases that humans have,” said Scott Monteith, MD, a psychiatrist, clinical assistant professor of psychiatry, and the study’s lead author. “When it comes to mental illness, sadly there’s a tremendous amount of stigma.”

The AI algorithms, including those developed by major software companies, are proprietary and, therefore, unavailable for independent researchers to judge their scientific validity, Monteith said.

The worldwide market for emotion AI is expected to increase from $19.9 billion in 2020 to $52.8 billion by 2026. Advertising, marketing, retail, education, policing, employment, and insurance are among the 18 business sectors already using emotion AI, according to a 2018 United Nations report.

Job applicants and employees often are unaware that AI is monitoring them and making judgments about their emotional states.

“That’s one of the challenging elements of this,” said Monteith, who is based in Traverse City. “Sometimes AI is being used, and you may not be aware it is being used. It is insidious, and it is increasingly ubiquitous.”

That is particularly troublesome, he said, because people with mental illnesses, other disabilities, or facial disfigurements might not be analyzed accurately or fairly.

Some AI programs scan resumes for keywords and use algorithms to analyze text, raising the possibility that some job applicants will be rejected without human involvement. In analyzing facial images, the programs typically look for indicators of six emotional states: anger, disgust, fear, happiness, sadness, and surprise.
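
As a rough illustration of the resume-keyword step, an automated filter might look like the sketch below. The keyword list, scoring rule, and cutoff are invented for this example and do not come from any actual screening product.

```python
# Hypothetical keyword screen: applicants scoring below a cutoff are never
# seen by a human reviewer, which is the scenario the article warns about.
KEYWORDS = {"python", "leadership", "agile"}  # invented example terms
CUTOFF = 2

def screen_resume(text: str) -> bool:
    """Return True if the resume passes the automated keyword filter."""
    words = set(text.lower().split())
    score = len(KEYWORDS & words)
    return score >= CUTOFF

print(screen_resume("Experienced in Python and agile team leadership"))  # True
print(screen_resume("Ten years of clinical research experience"))        # False
```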

A primary goal of his article, Monteith said, was to raise awareness of how common emotion AI is becoming and of the lack of sound scientific evidence behind it.

The technology, he suggested, should be regulated through a combination of government standards and voluntary “best practices guidelines.”

“I am very pro technology,” he said. “Like any tool, AI is potentially helpful and potentially harmful. AI should be vetted to protect everyone involved.”