Published by the Students of Johns Hopkins since 1896

Hatice Gunes describes research on emotionally intelligent AI at LCSR seminar

By CATHERINE POLLARD | April 18, 2023


DEEPAK PAL / CC BY-SA 2.0

Hatice Gunes delivered a seminar on the emotional potential of artificial intelligence (AI) technology.

The Laboratory for Computational Sensing and Robotics (LCSR) hosted Hatice Gunes, a professor in the University of Cambridge’s department of Computer Science and Technology, on April 12. Her seminar was titled “Emotional Intelligence for Human-Embodied AI Interaction” and covered Gunes’ work at the intersection of psychology, computer science and robotics.

While Gunes touched upon the details of her algorithms and assumed some machine learning background, her talk primarily focused on her applications of, and novel approaches to, emotionally intelligent AI.

She described the concept of continual and individualized learning, in which the robots used in her lab’s studies are taught to adapt their actions based on the explicit and implicit feedback of their human users. Within the span of a single interaction, participants come to perceive the robot as more responsive, caring and attentive.
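This kind of continual, individualized adaptation can be sketched as a simple bandit-style learner that picks an interaction style each round and updates a running value estimate from user feedback. The styles, feedback values and epsilon-greedy rule below are illustrative assumptions, not Gunes’ actual algorithm:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

class AdaptiveCoach:
    """Epsilon-greedy bandit over coaching styles (illustrative sketch)."""

    def __init__(self, styles, epsilon=0.1):
        self.styles = list(styles)
        self.epsilon = epsilon                   # exploration rate
        self.value = {s: 0.0 for s in styles}    # running mean feedback
        self.count = {s: 0 for s in styles}

    def choose(self):
        # Mostly exploit the best-rated style; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.styles)
        return max(self.styles, key=self.value.get)

    def update(self, style, feedback):
        # Incremental mean of per-interaction feedback values in [0, 1].
        self.count[style] += 1
        self.value[style] += (feedback - self.value[style]) / self.count[style]

coach = AdaptiveCoach(["scripted", "affect-sensitive"])
for _ in range(200):
    style = coach.choose()
    # Simulated user who responds better to affect-sensitive coaching,
    # mirroring the preference participants reported in the study.
    coach.update(style, 0.9 if style == "affect-sensitive" else 0.5)
print(coach.value)
```

Over repeated interactions, the learner’s value estimate for the affect-sensitive style overtakes the scripted one, so the coach converges on the personalized behavior.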

“We can see an indication of a clear preference with the participants toward adaptive coaching that is sensitive to their affective expressions,” Gunes said. “Conditions two and three (conditions with personalization) are rated better than condition one (a scripted interaction) across all evaluations.”

Gunes focused on two main areas of application. 

First, according to Gunes, humanoid service robots are usually deployed and valued for their novelty. However, she sees value in one key application of humanoid robots: filling staffing shortages. Her lab investigated the differences between human and robotic servers and caregivers. Though human customers often distrust AI services, Gunes maintained that the use of adaptive, emotionally intelligent AI minimizes that distrust.

The second area Gunes focused on was the use of AI for social well-being. Reward functions and reinforcement learning are integral to the development of adaptive AI, according to one of Gunes’ studies. Her researchers turned this concept on its head: Rather than optimizing a player’s raw performance, their algorithm tuned the difficulty of a video game to improve each participant’s attentiveness and mood while playing.

Gunes found that seeking a state in which the player was scoring imperfectly but displayed positive emotions cultivated a feeling of competence and well-being. 

“We relied on the theory of flow, which says that, when one is actively engaged where their skills are balanced to the challenge of the activity, they approach an optimal state,” she said. “The context of this is... that cognitive training is considered as a preventative intervention, particularly for people who are prone to dementia.” 
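The description above suggests a reward that peaks when play is good but imperfect and the player’s affect is positive. A minimal sketch under that reading, where the 0.8 score target, the linear penalty and the equal weights are all illustrative assumptions rather than details from the study:

```python
def flow_reward(score_ratio, affect, target=0.8, w_perf=0.5, w_affect=0.5):
    """Illustrative flow-inspired reward.

    score_ratio: fraction of available points achieved, in [0, 1].
    affect: positive-emotion estimate in [0, 1] (e.g., from facial analysis).
    Returns a reward in [0, 1] that peaks at imperfect-but-good play
    combined with positive affect.
    """
    # Penalize distance from the imperfect-play "sweet spot" at `target`,
    # normalized so the term stays in [0, 1].
    perf_term = 1.0 - abs(score_ratio - target) / max(target, 1.0 - target)
    return w_perf * perf_term + w_affect * affect

# Imperfect score with positive affect beats a perfect score with flat mood.
print(flow_reward(0.8, 0.9))  # near the sweet spot, happy player
print(flow_reward(1.0, 0.4))  # perfect score, flatter mood
```

A difficulty controller trained against such a reward would raise the challenge when the signal indicates boredom (high score, flat affect) and lower it when it indicates frustration (low score, negative affect).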

Gunes also spoke on the rise of generative AI, a class of AI systems that create content in many forms. Her lab studies the topic through a facial-expression prediction and interpretation program. By comparing markers of the predicted and actual facial expressions in response to a scripted conversation, Gunes’ lab is attempting to simulate cognitive processes specific to each person. Furthermore, the program can learn from a subject’s facial expressions over time to predict their personality.

According to Gunes, as humanoid robots become more common, the algorithm can be used to create robotic facial expressions for better engagement in encounters with humans. Additionally, the algorithm may be used to recognize signs of depression in the facial expressions of patients.

Regarding the ethics of emotionally generative AI, Gunes acknowledged several challenges. For example, although her programs have emotion-recognition capability, they are not inherently emotional but are essentially data processors. Assigning responsibility to an AI for its decisions is also fraught with ethical complications.

In response to these concerns, Gunes clarified that her emotionally responsive AI systems are not designed for use in psychiatry or therapy but rather to promote general well-being in their users.

Jeremy Brown, a researcher in the LCSR, was interested in the applications of Gunes’ research to his own work on touch-based feedback, a developing topic in the field of robotics. He posed a question during the seminar.

“How have — or can — other modalities for affective communication, such as touch, be integrated into your framework?” he said.

Gunes highlighted the extreme importance of touch in emotional interactions, especially in the case of interactions with children. She recalled her observation of children’s interaction with robots in a study.

“Kids really want to always touch and press buttons and so on. In fact, we now ask them to press buttons on the robot during [well-being] evaluations,” she said. “The way kids interact is so different from the way adults interact.”

Looking to the future of her lab, Gunes hopes to continue improving human-AI interactions by using continual learning to understand why humans act the way they do and to make AI responses more individualized. Her lab is also attempting to define standards for how an AI should respond.

“These models need to be sensitive to individual attributes — beyond gender, race, age, down to the individual — and adapt new instances of already-learned concepts,” she said.

