Would you like to make your virtual avatar more human-like? Or would you like to talk to a robot that is far more expressive and natural in how it communicates? In this project, we aim to address such challenges by designing an AI-powered interactive model prototype.
How do you generate the emotion you need, on demand?
In this project, we leverage the power of generative AI to build a multimodal emotion synthesizer: a model that diffuses incomplete, single-modality emotion style patterns into an aligned multimodal emotion encoder representing face, body language, and audio within a unified space. For example, a robot that conveys empathy in its audio output should also display an appropriately aligned bodily expression; likewise, a virtual agent that sounds excited in its response should show a matching facial expression. However, human emotion is complex and evolves with intra-personal, inter-personal, and environmental context, and its expression varies further with users' socio-cultural and ethnic backgrounds, so model generalizability is a challenge in itself. In this project, we aim to address these issues and build a culturally and emotionally sensitive interactive prototype that can adapt to a wide range of user environments.
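To make the idea of a "unified space" concrete, here is a minimal, hypothetical sketch: each modality (face, body, audio) gets its own projection into a shared embedding space, and alignment between two modalities' emotion cues is scored by cosine similarity. The dimensions, random projections, and function names are illustrative assumptions, not the project's actual architecture (a real system would learn these projections, e.g. with a contrastive objective).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions (illustrative only).
DIMS = {"face": 128, "body": 64, "audio": 96}
SHARED_DIM = 32

# One linear projection per modality into the shared emotion space.
# In practice these would be learned; random weights stand in here.
projections = {m: rng.normal(0.0, 0.1, (d, SHARED_DIM)) for m, d in DIMS.items()}

def embed(modality, features):
    """Project raw modality features into the shared space and L2-normalize."""
    z = features @ projections[modality]
    return z / np.linalg.norm(z)

def alignment(z_a, z_b):
    """Cosine similarity between two unit-norm embeddings (1.0 = fully aligned)."""
    return float(z_a @ z_b)

# Toy inputs standing in for per-modality emotion features.
face_feats = rng.normal(size=DIMS["face"])
audio_feats = rng.normal(size=DIMS["audio"])

z_face = embed("face", face_feats)
z_audio = embed("audio", audio_feats)
score = alignment(z_face, z_audio)
print(f"face-audio alignment score: {score:.3f}")  # always within [-1, 1]
```

Training would push the embeddings of co-occurring face/body/audio cues from the same emotional moment toward high alignment scores, so that generating a missing modality amounts to decoding from this shared representation.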
Demo prototype and publications.
Length of commitment | Longer than a semester (6-9 months)
Start time | Anytime
In-person, remote, or hybrid? | Hybrid (remote and/or in-person; to be determined by mentor and student)
Level of collaboration | Small group project (2-3 students)
Benefits | Academic credit, stipend
Who is eligible | All undergraduate students
Sreyasee Das Bhattacharjee
Assistant Professor of Research & Teaching
Computer Science and Engineering
Phone: (980) 267-1610
Email: sreyasee@buffalo.edu
Once you begin the digital badge series, you will have access to all the necessary activities and instructions. Your mentor has indicated they would like you to also complete the specific preparation activities below. Please refer to this list when you reach Step 2 of the Preparation Phase.
Multimodal Emotion Analysis, Generative AI, Machine Learning, Large Multimodal Model