Generalized Multimodal Emotion Synthesis for Interactive Agents


Would you like to make your virtual avatar more human-like? Or would you like to talk to a robot that is far more expressive and natural in its responses? In this project, we aim to address these challenges by designing an AI-powered interactive model prototype.

Project description

How do you generate the emotion you need?

In this project, we leverage the power of generative AI to build a multimodal emotion synthesizer that diffuses incomplete, single-modality emotion style patterns into an aligned multimodal emotion encoder representing face, body language, and audio within a unified space. For example, a robot that conveys empathy in its audio output should also display an appropriately aligned bodily expression; likewise, a virtual agent expressing excitement in its response should show a matching facial expression. However, given the complexity of human emotion and its evolution under varying intra-personal, inter-personal, and surrounding contexts, which may further differ with users' socio-cultural and ethnic backgrounds, model generalizability is a challenge in itself. In this project, we aim to address these issues to build a culturally and emotionally sensitive interactive prototype that can adapt to a wide range of user environments.
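One common way to realize the "unified space" idea above is a contrastive alignment objective: embeddings of the same emotional moment from different modalities (e.g., face and audio) should land closer together than embeddings of mismatched moments. The minimal NumPy sketch below illustrates this with a symmetric InfoNCE-style loss; all names, dimensions, and data are hypothetical placeholders, not the project's actual method.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(a, b, temperature=0.1):
    """Symmetric InfoNCE-style loss: matched pairs (row i of a, row i of b)
    should score higher than all mismatched pairs in the shared space."""
    a, b = l2_normalize(a), l2_normalize(b)
    logits = a @ b.T / temperature                 # pairwise cosine similarities
    idx = np.arange(len(a))                        # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)    # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()              # penalize low diagonal probability

    # average both directions: face -> audio and audio -> face
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy data: two modalities derived from the same hypothetical "emotion codes"
rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 8))                   # 4 moments, 8-dim shared codes
face = shared + 0.05 * rng.normal(size=(4, 8))     # aligned "face" embeddings
audio = shared + 0.05 * rng.normal(size=(4, 8))    # aligned "audio" embeddings
random_audio = rng.normal(size=(4, 8))             # unrelated embeddings

print("aligned loss:  ", alignment_loss(face, audio))
print("misaligned loss:", alignment_loss(face, random_audio))
```

Aligned modality pairs yield a much lower loss than unrelated ones, which is exactly the signal an encoder would be trained on to pull face, body, and audio representations into one space.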

Project outcome

Demo prototype and publications.

Project details

Timing, eligibility and other details
  • Length of commitment: Longer than a semester; 6-9 months
  • Start time: Anytime
  • In-person, remote, or hybrid? Hybrid project (can be remote and/or in-person; to be determined by mentor and student)
  • Level of collaboration: Small group project (2-3 students)
  • Benefits: Academic credit, stipend
  • Who is eligible: All undergraduate students

Core partners

  • Prof. Junsong Yuan from the CSE Department and faculty members from the Jacobs School of Medicine

Project mentor

Sreyasee Das Bhattacharjee

Assistant Professor of Research & Teaching

Computer Science and Engineering

Phone: (980) 267-1610

Email: sreyasee@buffalo.edu

Start the project

  1. Email the project mentor using the contact information above to express your interest and get approval to work on the project. (Here are helpful tips on how to contact a project mentor.)
  2. After you receive approval from the mentor to start this project, click the button to start the digital badge. (Learn more about ELN's digital badge options.) 

Preparation activities

Once you begin the digital badge series, you will have access to all the necessary activities and instructions. Your mentor has indicated they would also like you to complete the specific preparation activities below. Please refer to this list when you reach Step 2 of the Preparation Phase.

  • Reading seminal articles or books
  • Other activities 

Keywords

Multimodal Emotion Analysis, Generative AI, Machine Learning, Large Multimodal Model