Robust Multimodal Data Analytics in an Interactive Environment

Artificial Intelligence.

In this research, we plan to design interactive AI models for different use-case applications, such as identifying bias in web-based content, assessing speakers’ mental health in multi-speaker videos, and predicting user interest for effective contextual advertising.

Project description

The availability of large quantities of labeled data has enabled deep learning methods to achieve impressive breakthroughs in several artificial intelligence tasks, such as speech recognition, object recognition, and social media content analysis. However, these supervised models require large amounts of labeled data and many iterations of learning (i.e., high learning costs) to train their large numbers of parameters. This severely hinders their scalability, making it fundamentally difficult for them to adapt automatically to continuously evolving data patterns or to learn new, emerging, or rare data categories, due to high annotation costs, limited labeled samples, and computationally expensive learning algorithms. The classification challenge grows further when a system must learn discriminative data characteristics from only a limited number of annotated samples. An important research direction is therefore to extend existing models to adapt to changing data patterns in an interactive learning setting, which uses sparse expert intervention to regulate the learning process and attempts to capture complementary, heterogeneous multimodal information (images, text accompanying a post, and other metadata such as user details and geolocation), wherever appropriate. Toward building an interactive AI model, one possible research approach is to introduce multimodal chatbots, in which users and a conversational agent communicate through natural language, text, and visual data.
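To make the two ideas in the description concrete, the minimal sketch below shows (a) a late-fusion multimodal representation built by concatenating per-modality features (image, text, metadata) and (b) an uncertainty-based query step that asks the expert to label only the most ambiguous posts, keeping intervention sparse. All feature sizes, the random placeholder weights, and the function names are hypothetical illustrations, not part of the project's actual design; a real prototype would use learned encoders and a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(image_feat, text_feat, meta_feat):
    """Late fusion: concatenate per-modality feature vectors."""
    return np.concatenate([image_feat, text_feat, meta_feat])

# Hypothetical pre-computed embeddings for a pool of unlabeled posts.
n_posts = 20
pool = [fuse(rng.normal(size=64),   # e.g., image-encoder output
             rng.normal(size=32),   # e.g., text-encoder output
             rng.normal(size=4))    # e.g., user/geo metadata
        for _ in range(n_posts)]

# Toy linear classifier over the 100-dim fused vector
# (random placeholder weights; learned from labels in practice).
num_classes = 3
W = rng.normal(size=(num_classes, 100)) * 0.1

def predict_proba(x):
    logits = W @ x
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

def entropy(p):
    """Predictive entropy: higher means the model is less certain."""
    return -np.sum(p * np.log(p + 1e-12))

# Interactive step: request expert labels only for the 3 posts the
# model is most uncertain about (highest predictive entropy).
scores = [entropy(predict_proba(x)) for x in pool]
query_ids = np.argsort(scores)[-3:]
print(sorted(int(i) for i in query_ids))
```

After the expert labels the queried posts, the classifier would be retrained and the query loop repeated, which is the standard uncertainty-sampling pattern for reducing annotation cost.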

Project outcome

  • Build a prototype
  • Publish results in top conferences/journals

Project details

Timing, eligibility and other details
Length of commitment: Individual student project
Start time: Fall, Spring, Winter, Summer
In-person, remote, or hybrid? Hybrid
Level of collaboration: Individual student project
Benefits: Academic Credit (open to discussion)
Who is eligible:

Students should have a background in at least one of the courses in the AI concentration and good programming skills in Python.

Juniors/Seniors

Project mentor

Sreyasee Das Bhattacharjee

Assistant Professor of Research & Training

Computer Science & Engineering

Phone: (716) 645-4769

Email: sreyasee@buffalo.edu

Start the project

  1. Email the project mentor using the contact information above to express your interest and get approval to work on the project. (Here are helpful tips on how to contact a project mentor.)
  2. After you receive approval from the mentor to start this project, click the button to start the digital badge. (Learn more about ELN's digital badge options.) 

Preparation activities

Once you begin the digital badge series, you will have access to all the necessary activities and instructions. Your mentor has indicated they would like you to also complete the specific preparation activities below. Please reference this when you get to Step 2 of the Preparation Phase. 

  • Discuss and identify the problem
  • Conduct a literature review
  • Investigate the problems/challenges
  • Identify a couple of recent works on which to build the baseline prototype
  • Submit a report

Based on this, the project will then progress through the following steps:

  • Build the baseline prototype
  • Investigate the challenges faced by the baseline model
  • Discuss how to improve it
  • Iterate on the prototype to improve performance
  • Submit a weekly progress report before each weekly meeting

Keywords

Computer Science & Engineering