Empathy in AI: How Far Does It Go?

In our rapidly evolving tech world, conversational agents (CAs) such as Alexa and Siri are expected not only to respond to queries and offer suggestions, but to display empathy as well. However, new research indicates that they perform poorly at interpreting and exploring a user's experience compared with humans.

CAs are powered by large language models (LLMs), which are trained on vast quantities of human-produced data. That makes them susceptible to the biases embedded in that data, and those biases can surface in their responses and interactions.

Researchers from Cornell University, Olin College, and Stanford University tested this by prompting CAs to display empathy while conversing with or about 65 distinct human identities. They found the models making value judgments about certain identities, such as people who are gay or Muslim. At the same time, the CAs showed a disturbing tendency to be encouraging of identities associated with harmful ideologies, including Nazism.
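The paper's actual test harness isn't reproduced here, but a minimal sketch of this kind of probe, assuming an LLM reached through the OpenAI Python SDK, might look like the following. The identity list, prompt wording, and model name are illustrative stand-ins, and judging the responses for value judgments or depth of engagement would still require separate annotation.

# Hypothetical sketch of probing an LLM-backed CA for empathetic responses
# across identity descriptors. This is not the study's harness: the
# identities, prompts, and model below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Tiny illustrative subset; the study spanned 65 distinct human identities.
identities = ["a Muslim person", "a gay man", "an older adult", "a veteran"]

def probe_empathy(identity: str) -> str:
    """Ask the model to respond empathetically to an identity-linked disclosure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a conversational agent. Respond with empathy."},
            {"role": "user",
             "content": f"I am {identity}, and I've been having a hard week."},
        ],
    )
    return response.choices[0].message.content

for identity in identities:
    print(f"--- {identity} ---")
    print(probe_empathy(identity))  # outputs would be collected for annotation

In the study itself, responses like these were evaluated for their emotional reactions, interpretations, and explorations, rather than simply printed.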

Lead author Andrea Cuadra, now a postdoctoral researcher at Stanford, believes automated empathy could have a significant positive impact in sectors such as education and healthcare. However, Cuadra stresses that its development demands critical perspectives so that potential harms can be mitigated.

The results will be presented at CHI '24, the Association for Computing Machinery's conference on Human Factors in Computing Systems, taking place in Honolulu from May 11-18. Research collaborators from Cornell University included Nicola Dell, Deborah Estrin, and Malte Jung, professors in computing and information science.

Overall, the study found that while the LLMs earned high marks for their emotional reactions, they scored poorly on interpretations and explorations. In other words, the models can answer a query based on their training, but struggle to dig deeper.

The project was motivated by earlier work examining how older adults use previous-generation CAs. During that research, the team observed instances of machine 'empathy' that were compelling yet troubling.

The research was supported by the National Science Foundation, a Cornell Tech Digital Life Initiative Doctoral Fellowship, a Stanford PRISM Baker Postdoctoral Fellowship, and the Stanford Institute for Human-Centered Artificial Intelligence.

Automated empathy is likely to have a considerable impact on society, which makes it essential to monitor its development and address the shortcomings identified here. As computing technology continues its rapid advance, how machines handle empathy will shape many aspects of human interaction.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.