Future applications of national importance, such as healthcare, critical infrastructure, transportation systems, and smart cities, are expected to rely increasingly on machine-learning methods, including structured learning, supervised learning, and reinforcement learning. In many of these applications, the probability distribution governing the data may vary with time and location, and data may be corrupted by faulty or malicious agents/sensors. Such model deviation and data corruption can result in significant performance degradation. The goal of this project is to explore new ways to design learning and inference methods that are robust to distributional uncertainty and data corruption. This project is bridging and further advancing research in the areas of statistical learning, optimization, control theory, network science, reinforcement learning, statistical signal processing, and information theory. The methods developed are likely to have significant impact on a wide range of applications in areas of societal importance such as healthcare, transportation systems, smart grids, and smart cities.

The investigators are co-organizing special sessions at conferences, workshops, and symposia on robust learning and inference to disseminate the research outcomes of this project, formalize far-reaching research directions, identify new challenges in this emerging area, stimulate the development of original research ideas, and foster interdisciplinary collaborations. The investigators are committed to broadening the participation of under-represented minorities and women among both graduate and undergraduate students in computing and engineering. The investigators are also enriching their current courses and developing new courses on topics related to this project.
This project is expected to make new contributions to the theory and practice of robust learning and inference. Several emerging directions are being investigated, including robust sketch-based learning, robust mean estimation, synthesis of confusing inputs to machine-learning models, robustness to distributional uncertainty at inference time, and robust model-free reinforcement learning.
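To illustrate one of the listed directions, robust mean estimation, the following is a minimal sketch of the classical median-of-means estimator (not necessarily the specific method developed in this project): the samples are split into blocks, each block is averaged, and the median of the block means is returned. A few corrupted samples can spoil only a few blocks, so the median remains close to the true mean even when the naive sample mean is dragged far away.

```python
import random
import statistics

def median_of_means(samples, num_blocks=5):
    """Robust mean estimate: partition the samples into blocks,
    average each block, and return the median of the block means.
    With k corrupted samples, at most k blocks are affected, so the
    estimate is reliable as long as k < num_blocks / 2."""
    samples = list(samples)       # copy so the caller's data is untouched
    random.shuffle(samples)       # spread any adversarial ordering across blocks
    blocks = [samples[i::num_blocks] for i in range(num_blocks)]
    block_means = [sum(b) / len(b) for b in blocks]
    return statistics.median(block_means)

# Clean data clustered around 10, plus two gross outliers from a
# hypothetical faulty sensor.
data = [10 + 0.1 * (i % 7 - 3) for i in range(100)] + [1e6, 1e6]
robust = median_of_means(data)    # stays near 10
naive = sum(data) / len(data)     # pulled far from 10 by the outliers
```

The key design point is that the median, unlike the mean, is insensitive to a minority of arbitrarily bad values; median-of-means transfers that insensitivity to mean estimation at only a modest cost in statistical efficiency.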