Analysis Seminar
Jakob Streipel, SUNY at Buffalo
Second moments and zero-density estimates
4:00 PM, 250 Mathematics Building
The Riemann Hypothesis for an L-function says that its nontrivial zeros lie on a certain line. If this were true, it would simplify many applications of these L-functions, simply because a certain part of the calculation would become fixed instead of potentially variable. We don’t know how to prove the Riemann Hypothesis for any of the L-functions we study, but sometimes, for certain applications or computations, the second best thing suffices: knowing that, in some quantifiable way, at most very few of the zeros fail to lie on the line where they ought to be. That way, even though there might be outliers, they are rare enough not to ruin the overall picture too much. Such a result is known as a zero-density estimate. This talk will be about zero-density estimates in general, how one might find them, and how second moments of L-functions are a natural way to do so. Along the way we will discuss recent joint work with Sheng-Chi Liu, in which we compute a zero-density estimate for the L-functions associated with GL(2) Hecke–Maass cusp forms.
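For orientation (a gloss on the abstract, not part of it): zero-density estimates for the Riemann zeta function are typically stated in the following schematic form, where A and B are explicit constants.

    \[
      N(\sigma, T) := \#\bigl\{\, \rho = \beta + i\gamma :\ \zeta(\rho) = 0,\ \beta \ge \sigma,\ 0 < \gamma \le T \,\bigr\}
      \ \ll\ T^{A(1-\sigma)} (\log T)^{B},
      \qquad \tfrac{1}{2} \le \sigma \le 1.
    \]

Since there are on the order of T log T zeros up to height T, such a bound says that zeros with real part greater than any fixed sigma > 1/2 form a vanishing proportion of all zeros, and the bound sharpens as sigma moves away from the critical line; analogous statements can be formulated for the L-functions in the talk.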
Applied Mathematics Seminar
Yulong Lu (U Minnesota)
In-Context Learning in Scientific Computing
4:00 PM, MATH 250
Transformer-based foundation models, pre-trained on large datasets spanning a wide range of tasks, have shown remarkable adaptability to diverse downstream applications, even in low-data regimes. A particularly striking capability is in-context learning (ICL): when given a prompt containing a few examples from a new task alongside a query, these models can produce accurate predictions without any parameter updates. This emergent behavior is often viewed as a paradigm shift for transformers, yet its theoretical foundations remain only partially understood. In this talk, I will present recent theoretical progress toward understanding ICL in scientific computing. I will focus on how transformer architectures can implicitly perform task adaptation in three representative problem classes: learning solution operators of PDEs, dynamical system prediction, and generative modeling.
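As a concrete illustration of the ICL mechanism (a minimal sketch, not taken from the talk; all names and dimensions below are illustrative assumptions), the following toy example mimics a well-known observation from the ICL theory literature: a single frozen linear-attention head can reproduce one step of gradient descent on in-context regression examples, producing a prediction with no parameter updates.

    import numpy as np

    # Minimal ICL sketch (illustrative assumption, not the speaker's
    # construction): a frozen linear-attention computation that adapts
    # to a new regression task purely through the examples in the prompt.
    rng = np.random.default_rng(0)
    d, n = 5, 500
    w_true = rng.normal(size=d)

    # "Prompt": n in-context examples (x_i, y_i) from an unseen task,
    # plus a query point x_q.
    X = rng.normal(size=(n, d))
    y = X @ w_true
    x_q = rng.normal(size=d)

    # Linear attention at the query token: values y_i weighted by the
    # unnormalized scores <x_q, x_i>. With step size eta, this equals
    # x_q . w after one gradient-descent step from w = 0 on the
    # in-context loss (1/2) * sum_i (x_i . w - y_i)^2.
    eta = 1.0 / n
    pred = eta * np.sum((X @ x_q) * y)

    print("target  :", x_q @ w_true)
    print("ICL pred:", pred)

The point of the toy example is only the mechanism: the "model" is fixed, and all adaptation to the new task happens through the examples placed in the prompt.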
Topology and Geometry Seminar
Roberta Shapiro (University of Michigan)
Geometry, topology, and combinatorics of fine curve graphs
4:00 PM, 122 Mathematics Building
The fine curve graph of a surface is a graph that encodes information about the curves on the surface and how they interact. It is similar to the more classical curve graph, which encodes information about the isotopy classes of curves on a surface. In this talk, we construct both graphs and compare and contrast some of their properties, such as the groups that act on them, their geometry, their topology, and their combinatorics. Some of the results presented are joint work with Ryan Dickmann, Zachary Himes, and Alex Nolte.
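For orientation (following common conventions in the literature; the talk's definitions may differ in detail), the two graphs attached to a surface S can be sketched as follows.

    \[
      \mathcal{C}(S):\ \text{vertices are isotopy classes of essential simple closed curves in } S,
      \text{ with edges joining classes that admit disjoint representatives;}
    \]
    \[
      \mathcal{C}^{\dagger}(S):\ \text{vertices are the essential simple closed curves themselves},
      \text{ with edges joining curves that are literally disjoint.}
    \]

Passing from isotopy classes to actual curves is what lets the full homeomorphism group of S act on the fine curve graph, whereas the classical curve graph carries an action of the mapping class group.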
Analysis Seminar
Rizwanur Khan, University of Texas at Dallas
TBA
3:30 PM, 122 Mathematics Building
TBA
Applied Mathematics Seminar
Lili Ju (U of South Carolina)
Transferable Neural Networks for Partial Differential Equations
4:00 PM, MATH 250
Transfer learning for partial differential equations (PDEs) aims to develop a pre-trained neural network that can be used to solve a wide class of PDEs. Existing transfer learning approaches require substantial information about the target PDEs, such as their formulation and/or data of their solutions, for pre-training. In this work, we propose to design transferable neural feature spaces for shallow neural networks from a purely function-approximation perspective, without using PDE information. The construction of the feature space involves re-parameterizing the hidden neurons and uses auxiliary functions to tune the resulting feature space. Theoretical analysis shows the high quality of the produced feature space, i.e., the neurons are uniformly distributed. We use the proposed feature space as the predetermined feature space of a random feature model and use existing least-squares solvers to obtain the weights of the output layer. Extensive numerical experiments verify the outstanding performance of our method, including significantly improved transferability, e.g., using the same feature space for various PDEs with different domains and boundary conditions, and superior accuracy, e.g., mean squared errors several orders of magnitude smaller than those of state-of-the-art methods. Finally, we discuss ongoing and future research topics in this direction.
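To make the general framework concrete, here is a minimal sketch of a random feature model with least-squares output weights, the setup the abstract builds on; the target function, tanh activation, and sampling ranges are illustrative assumptions, and the paper's specific neuron re-parameterization and auxiliary tuning functions are not reproduced here.

    import numpy as np

    # Minimal random-feature-model sketch (illustrative assumptions, not
    # the paper's construction). Hidden-layer parameters are fixed in
    # advance -- the "transferable" feature space -- and only the output
    # layer is fit by linear least squares.
    rng = np.random.default_rng(0)

    def features(x, W, b):
        # Shallow-network hidden layer with frozen weights.
        return np.tanh(x @ W.T + b)

    def target(x):
        # Stand-in target on [-1, 1], playing the role of a PDE solution.
        return np.sin(np.pi * x[:, 0]) * np.exp(-x[:, 0] ** 2)

    m, d = 300, 1                        # number of features, input dimension
    W = rng.uniform(-3, 3, size=(m, d))  # predetermined feature space
    b = rng.uniform(-3, 3, size=m)

    x_train = rng.uniform(-1, 1, size=(500, d))
    A = features(x_train, W, b)
    c, *_ = np.linalg.lstsq(A, target(x_train), rcond=None)  # output weights

    x_test = rng.uniform(-1, 1, size=(100, d))
    mse = np.mean((features(x_test, W, b) @ c - target(x_test)) ** 2)
    print("test MSE:", mse)

Reusing the same W and b for a different target (or, in the PDE setting, a different equation, domain, or boundary condition) only requires re-solving the cheap least-squares problem, which is the sense in which such a feature space is transferable.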