E T H E R E A L

Empathetic AI for augmented care in human wellbeing

The use of conversational agents, such as chatbots and virtual personal assistants, has become increasingly prevalent with the advances in digital technology. While these tools offer many benefits, the design of their personalities has often reinforced societal stereotypes. Research suggests that a more human-centered approach is necessary, and that conversational agents should be designed to be less human-like and more contextually aware. Additionally, there is a need for more research on how consumers perceive and interact with anthropomorphized AI and how these technologies can be designed to promote positive social behavior and integrate with the environment.

The Ethereal project seeks to address these challenges by exploring how AI can regulate human-like traits to cultivate empathy and foster trustworthy social interactions. It focuses on designing ambient assisted living environments where AI agents and their personas interact with the human actor, using emergent interfaces to create pervasive and emotionally responsive experiences.

The goal of this project is to transcend human limitations and enhance societal wellbeing by integrating AI-driven interactions that augment care delivery. To this end, this project explores experiments with AI assistive technology that leverages multi-sensory communication. Central to this vision is the ability of machines to comprehend human emotions and respond with emotional resonance. This capability can be realized through multimodal interaction and emotional contagion, enabling AI agents to engage in deeper, more empathetic connections with humans.

The project aims to establish multidisciplinary collaborations. Researchers and specialists from the fields of interaction design, digital arts, computer science, motion design / visual special effects, theatre and performance, psychology, and neuroscience are welcome to join. Additionally, it encourages master's and doctoral students to contribute through their theses, particularly when these align with the aesthetic approach of Ethereal.

The project is also open to partnerships with other research institutions that are interested in advancing knowledge in Human-AI Interaction, Affective Computing, Conversational AI Agents, and Ambient Assisted Living.


Context
The rise of digital technology has led to the development of conversational agents such as chatbots and intelligent personal assistants, which have proven highly beneficial in healthcare scenarios, digital therapy, and ambient assisted living for care delivery. Assistive technology increases confidence and a sense of security, and helps reduce social isolation in a society that seeks human longevity and care (Rodolfo, Correia, Duarte, et al., 2016). Design plays an important role in providing strategies for the implementation of cross-channel user experiences that allow human-centered care (Rodolfo et al., 2014).
 
Empathetic AI is an emerging field of research that seeks to replicate human empathy through computational models (Alazraki et al., 2021; Liu-Thompkins et al., 2022; Montemayor et al., 2022; Montiel-Vázquez et al., 2022; Nallur & Finlay, 2022; Zhu & Luo, 2023). However, research indicates that designing the AI personality or persona, a common practice in the development of conversational agents (Lawton, 2019), has reinforced stereotypes in our society (Pradhan & Lazar, 2021). Conversational agents, such as chatbots and intelligent personal assistants, have become social actors in human daily life, raising questions about trust, relationships, and anthropomorphism (Seymour & Van Kleek, 2021). Several studies identify the need to create AI agents that are progressively more empathetic and human-like, not only in their physical appearance but also in the way they mimic emotions and personality traits (Liu-Thompkins et al., 2022; Pradhan & Lazar, 2021).
However, there is a research gap in understanding how consumers might perceive themselves as congruent with anthropomorphized AI and how it can be integrated into our social behavior (Alabed et al., 2022). To address this issue, prior research has emphasized the importance of studying users' mental models and mapping the way people think onto system workflows (Rodolfo et al., 2014). Furthermore, it is crucial to engage innovators before early users when implementing new platform services and practices for human behavior change at a large scale (Laranjo et al., 2017). This can be achieved by aligning the goals of stakeholders who deploy new technology with desirable futures for society (Rodolfo et al., 2016).
 
The lack of research on user self-perception of AI raises ethical concerns about human-machine interaction. Studies are needed on integrating AI into daily life in ways that promote trust without requiring it to assume all traits of human form. Prior research proposes human-centered design to address artificial empathy in human-machine interfaces, using storytelling to elicit empathy and promote collaborative imagination (Haas et al., 2022; Mathur et al., 2021; Spitale et al., 2022; Zhao, 2013). Furthermore, the next generation of voice assistants may always be listening to the environment, proactively providing services and recommendations (Tabassum et al., 2019).
QUESTIONS
Q1 - How can AI be designed, presented, and integrated into our daily lives as an empathetic entity that assists the human actor without adopting all traits of human form?
Q2 - How can human-AI interactive dialogue be embedded in our environment as a ubiquitous, context-aware presence that augments human care through sensorial communication?
Q3 - How do humans perceive the integration of AI behavior in their lives while understanding it as an entity that influences their self-perception?
RESEARCH PLAN
UNDERSTANDING THE HUMAN EMPATHY FACTOR IN HUMAN-AI INTERACTION
 
The project combines an approach of human-centered design, UX/UI methods, and a strong emphasis on empathy as a central human factor to create innovative AI interaction experiences. The research plan includes the following research lines and methods:
 
Conduct literature reviews on AI conversational and virtual agents, focusing on empathy as a central factor to design human-AI interactions. Explore how empathy can be embedded in AI systems and the challenges it presents for human-centered design.

Study emergent human-machine interfaces (HMI) such as light, sound, speech, and motion graphics. Examine how these modalities interact in multimodal AI systems and their potential for enhancing empathy and user engagement through sensorial communication.

Perform ethnographic studies both online and offline, gathering data on current AI-related materials. Analyze human-AI interaction across different contexts to understand how people perceive and engage with AI agents.

Conduct empathy tests and sentiment analysis on current AI agents or experimental models to assess their ability to evoke emotional responses. This will inform the design of AI systems that are more responsive to human emotions.
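As a minimal illustration of the kind of sentiment analysis envisioned in this step, the sketch below scores agent responses against a small hand-built valence lexicon. The word lists and scoring rule are illustrative assumptions for a prototype, not a validated empathy instrument; a real study would use an established sentiment model.

```python
# Minimal lexicon-based sentiment scorer for AI agent responses.
# The word lists below are illustrative assumptions, not a validated
# empathy instrument.

POSITIVE = {"glad", "happy", "support", "understand", "care", "help", "sorry"}
NEGATIVE = {"error", "invalid", "cannot", "deny", "fail", "wrong"}

def sentiment_score(response: str) -> float:
    """Return a valence score in [-1, 1] for one agent utterance."""
    words = [w.strip(".,!?").lower() for w in response.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def mean_valence(responses: list[str]) -> float:
    """Average valence across a set of agent responses."""
    return sum(sentiment_score(r) for r in responses) / len(responses)
```

Averaging the score over a corpus of agent responses gives a first, coarse signal of how emotionally warm or cold an agent's language reads, which can then be compared against human empathy ratings.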

Organize focus groups with specialists from diverse fields, including digital art, theatre and neuroscience, to gain insights into empathy and emotional engagement in AI systems. Use this interdisciplinary input to shape the design of human-AI interactions.

Conduct user studies to explore human self-perception during interactions with AI in different settings, such as healthcare, co-working, or customer service environments. This will provide valuable insights into how people perceive AI in emotionally sensitive contexts.
 
DESIGN AI PERSONAS AND ORCHESTRATE INNOVATIVE EXPERIENCES

Develop personas and insights through co-design workshops, using data gathered in earlier stages. Use conceptual thinking to create affinity diagrams and touchpoint inventories that map the orchestration of multimodal AI interfaces, integrating light, sound, speech, and motion.

Design and develop a multimodal AI system that orchestrates light, sound, speech, and motion graphics. Focus on creating a coherent and empathetic user experience that addresses human comfort and emotional needs in care delivery scenarios.

Explore voice modulation and storytelling techniques to create more engaging and emotionally resonant AI interactions. Design conversational workflows based on personas developed in earlier stages, ensuring that the AI system is adaptable to different user needs and contexts.

Set up physical installations incorporating sensing devices to capture biofeedback responses during human-AI interactions. The installation will simulate environments like waiting rooms or therapy spaces, using motion design, controllable lights, and biofeedback to adapt to user emotional states.
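The adaptation loop for such an installation could be prototyped along these lines: a biofeedback reading drives an ambient light preset. The heart-rate thresholds and light presets are placeholder assumptions for a prototype, not clinically validated values, and a real installation would read from an actual sensor.

```python
# Rule-based mapping from a biofeedback signal (heart rate, in bpm) to an
# ambient light preset. Thresholds and presets are placeholder assumptions
# for a prototype installation, not clinically validated values.

def light_preset(heart_rate_bpm: float) -> dict:
    """Choose colour temperature and brightness from a heart-rate reading."""
    if heart_rate_bpm > 100:   # elevated arousal: dim, warm, calming light
        return {"kelvin": 2200, "brightness": 0.3, "mode": "calming"}
    if heart_rate_bpm < 60:    # low arousal: brighter, cooler light
        return {"kelvin": 4000, "brightness": 0.7, "mode": "energising"}
    return {"kelvin": 3000, "brightness": 0.5, "mode": "neutral"}
```

In the installation, the returned preset would be sent to the controllable lights, closing the loop between the user's physiological state and the ambient environment.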

Match motion design graphics with voice interfaces in the AI system, ensuring an emotionally responsive experience. The goal is to create a system that can recognize and respond to emotional cues through both visual and auditory channels.
 
TEST AND REFINE A FRAMEWORK AND GUIDELINES ON THE TOPIC

Conduct user testing such as A/B testing to compare user experiences in interacting with AI through stand-alone voice interfaces versus multimodal interaction.
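One simple way to analyse such an A/B comparison is a Welch two-sample t-test on user ratings from the two conditions. The sketch below computes the statistic in pure Python; the ratings shown are hypothetical data, and a real study would also compute degrees of freedom and a p-value with a statistics library.

```python
import statistics

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variance
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical user ratings (1-7 scale) for the two A/B conditions:
voice_only = [4, 5, 4, 3, 5, 4]   # stand-alone voice interface
multimodal = [6, 5, 6, 7, 5, 6]   # multimodal interaction
t = welch_t(multimodal, voice_only)
```

A large positive t would suggest that the multimodal condition was rated higher than the voice-only condition; Welch's variant is used because the two groups need not share the same variance.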

Develop and refine a framework for designing empathetic, context-aware AI agents that are seamlessly embedded in human environments. This framework will focus on delivering a multimedia experience that combines sound, visual effects, and light, supported by speech, to enhance human-AI interaction.

Iteratively test and refine the resulting prototypes, applying design cycles and user feedback to improve the AI system's empathy, adaptability, and multimodal interaction.
 
Present the work in conferences and collect insights to improve the solutions.
EXPECTED OUTCOMES
Expected outcomes aim to contribute to the fields of Human-AI Interaction, Empathetic AI, AI Ethics, and Affective Computing, with an impact on Ambient Assisted Living, Digital Health, and Digital Therapy, by publishing in international conferences and journals.
Moreover, they include a framework for designing empathetic AI personas to guide the design community in creating trustworthy AI agents for human interaction. This framework aims to address the ethical and societal impacts of integrating AI into daily life, enhancing the quality of care delivery by supporting human wellbeing.
References
Alabed, A., et al. (2022). AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda. Technological Forecasting and Social Change.
Alazraki, L., et al. (2021). An empathetic AI coach for self-attachment therapy. CogMI, IEEE.
Haas, G., et al. (2022). Keep it Short: A Comparison of Voice Assistants’ Response Behavior. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.
Laranjo, et al. (2017). Characteristics of innovators adopting a national personal health record in Portugal: cross-sectional study. JMIR Medical Informatics.
Lawton, G. (2019). Google Empathy Lab uses 'design feeling' in search of a more human UI. techtarget.com
Liu-Thompkins, Y., et al. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. Journal of the Academy of Marketing Science.
Mathur, L., et al. (2021). Modeling user empathy elicited by a robot storyteller. 9th International Conference on Affective Computing and Intelligent Interaction.
Montemayor, C., et al. (2022). In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare. AI & society.
Montiel-Vázquez, E. C., et al. (2022). An Explainable Artificial Intelligence Approach for Detecting Empathy in Textual Communication. Applied Sciences.
Nallur, V., & Finlay, G. (2022). Empathetic AI for ethics-in-the-small. AI & society.
Pradhan, A., & Lazar, A. (2021). Hey Google, do you have a personality? Designing personality and personas for conversational agents. Proceedings of the 3rd Conference on Conversational User Interfaces.
Rodolfo, I., et al. (2016). Perspectives on user experience for a nation-wide senior telehealth program. Proceedings of the 30th International BCS Human Computer Interaction Conference.
Rodolfo, I., et al. (2016). How far in the future will we start from? Interacting with the stakeholders of a nationwide patient portal. Proceedings of the 2016 CHI Conference.
Rodolfo, I., et al. (2014). Design strategy for a national integrated personal health record. Proceedings of the 8th Nordic Conference on Human-Computer Interaction.
Rodolfo, I., et al. (2014). The importance of mental models in the design of integrated PHRs. AMIA.
Seymour, W., & Van Kleek, M. (2021). Exploring interactions between trust, anthropomorphism, and relationship development in voice assistants. Proceedings of the ACM on Human-Computer Interaction.
Spitale, M., et al. (2022). Socially Assistive Robots as Storytellers that Elicit Empathy. JHRI.
Tabassum, M., et al. (2019). Investigating users' preferences and expectations for always-listening voice assistants. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
Zhao, H. (2013). Emotion in interactive storytelling. FDG.
Zhu, Q., & Luo, J. (2023). Toward Artificial Empathy for Human-Centered Design: A Framework. arXiv preprint.
Agenda 2030
SDG 9: Industry, Innovation and Infrastructure
SDG 12: Responsible Consumption and Production
 
The proposed project on empathetic AI supports the United Nations' 2030 Agenda Sustainable Development Goals 9 and 12, which focus on industry, innovation, and infrastructure, and on responsible consumption and production. The project aims to develop a framework for creating empathetic AI personas that foster a healthy connection between humans and AI, leading to innovative products and services that improve quality of life (Goal 9). It also examines ethical and societal concerns related to integrating empathetic AI personas into daily life, promoting responsible and sustainable practices in AI development and use to support wellbeing while minimizing negative impacts on society (Goal 12). Overall, the project can contribute to achieving the objectives of Goals 9 and 12 of the 2030 Agenda.

ETHEREAL PROJECT

Sixth Edition of the Stimulus to Scientific Employment from FCT.

Submission date: 08.05.2023 / Approval date: 28.11.2023 / Final results: 24.05.2024 / Beginning date: 08.09.2024


Interested in collaborating?


©2024 IR CEEC-IND
