Students are required to take either the GRE Math Subject Test or the Major Field Test in Mathematics (MFT) during their last semester before graduation. Students who have taken the GRE subject test do not need to take the MFT but should provide the university with evidence of having taken the subject test and the score they received.

The MFT is an ETS (Educational Testing Service) standardized assessment test of undergraduate mathematics. Interested students may go to the ETS Major Field Test website at http://www.ets.org/mft/about/content/mathematics for a test description and sample problems.

The MFT is a multiple-choice exam consisting of two parts, each one hour long and timed separately. Between the two parts of the exam, you may leave to get a drink or use the bathroom and may resume the second part of the exam when you return.

For those graduating in April, the next MFT will be given on Saturday, April 8 at 10:00 AM in 149 TMCB. If you are interested in taking the exam, or if you require additional information, please contact Lonette Stoddard at 801-422-2062 or lonettes@byu.edu.

This test does not appear on your transcript or affect your GPA. A passing score is not required.

**Title:** Counting elliptic curves with a 7-isogeny

**Abstract:** In this talk, we review the literature on arithmetic statistics for elliptic curves over Q. We present new asymptotics for the number of elliptic curves of height up to X which admit a 7-isogeny, and discuss directions for future work. This research is joint with John Voight.

**Title:** Universality in models of random growth

**Abstract:** Perhaps the greatest achievement of classical probability is the central limit theorem: under very mild assumptions, all properly normalized sums of independent and identically distributed random variables converge to the normal distribution. Today, a major focus of modern probability is to understand universal behavior of spatial stochastic models. In the last 30 years, it has been shown that special classes of random matrices, queuing systems, spatial growth models, and stochastic PDEs exhibit universal limiting statistics described by what is now called the Tracy-Widom distribution. In the last six years, richer objects whose one-point marginal distributions are given by the Tracy-Widom distribution have been constructed and shown to be the full scaling limit of several specialized models. Numerical and physical evidence suggests that this convergence should hold on a much broader scale, but proving such convergence outside of the specialized models remains a major open problem. In this talk, I will give a gentle introduction to this topic and describe my work to give new perspectives on this problem through another universal object known as the stationary horizon. Time permitting, I will discuss how the stationary horizon answers interesting questions about the fractal geometry of random growth models. Based on joint work with Ofer Busani and Timo Seppäläinen.
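The central limit theorem described in the abstract is easy to see empirically. The sketch below (illustrative only, not part of the talk; the sample sizes are arbitrary) standardizes sums of i.i.d. Uniform(0, 1) draws and checks that they look standard normal:

```python
# Empirical illustration of the central limit theorem: standardized sums of
# i.i.d. Uniform(0, 1) draws approach the standard normal distribution.
import random
import statistics

random.seed(0)
n = 500        # terms per sum
trials = 2000  # number of standardized sums

mu = 0.5                   # mean of Uniform(0, 1)
sigma = (1 / 12) ** 0.5    # standard deviation of Uniform(0, 1)

samples = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    # Standardize: (S_n - n*mu) / (sigma * sqrt(n))
    samples.append((s - n * mu) / (sigma * n ** 0.5))

print(round(statistics.mean(samples), 2))   # near 0
print(round(statistics.stdev(samples), 2))  # near 1
```

The sample mean and standard deviation of the standardized sums come out close to 0 and 1, as the theorem predicts, regardless of the (non-normal) distribution of the individual terms.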

**Title:** Agent-based modeling and topological data analysis of fish patterns

**Abstract:** Many natural and social phenomena involve individuals coming together to create group dynamics, whether the agents are drivers in a traffic jam, cells in a tissue, or locusts in a swarm. Here I will focus on the example of skin pattern formation in zebrafish. Zebrafish are named for their dark and light stripes, but mutant fish feature variable skin patterns, including spots and labyrinth curves. All of these patterns form as the fish grow due to the interactions of tens of thousands of pigment cells in the skin. This leads to the question: how do cell interactions change to create mutant patterns? To help address this question, I develop agent-based models to describe cell behavior in growing 2D domains. However, my models are stochastic and have many parameters, and comparing simulated patterns and fish images is often a qualitative process. In this talk, I will overview our models, discuss how methods from topological data analysis can be used to quantitatively describe cell-based patterns, and share ongoing research connecting different modeling approaches.

**Title:** An approach to computing Gromov-Witten invariants

**Abstract:** In the 1990s, theoretical physics gave rise to a new mathematical challenge: computing certain “virtual counts” of curves on manifolds. These counts, called the Gromov-Witten invariants of the manifold, model particle interactions in string theory. In principle one can determine the small-scale geometry of the universe by matching some numbers obtained in a physics lab to Gromov-Witten invariants computed in a math department (in practice, we are still waiting on the numbers from the lab). The challenge of computing Gromov-Witten invariants has motivated over two decades of mathematics, and there are still important open questions. I will discuss some explicit formulas for Gromov-Witten invariants that are available when the manifold is described by sufficiently “linear” data (representations of reductive groups), and the implications of these formulas for said open questions. These formulas also have many applications to both classical geometry and geometry motivated by physics.

**Title:** Quadratic magic: Diophantine equations with squares

**Abstract:** We survey some classical and recent results on Diophantine equations involving perfect squares. We also discuss some open problems. The talk will be accessible to a broad, non-specialist audience.

**Title:** Uniqueness of the measure of maximal entropy for the standard map

**Abstract:** Understanding the dynamics of a nonlinear system can be a very hard task, even for systems having a simple expression. A good example of such a system is the (Taylor-Chirikov) standard map. Sinai conjectured that the standard map has positive metric entropy for large parameters (i.e., it has a set of positive Lebesgue measure having non-zero Lyapunov exponents). The dynamics of the standard map is far from being well understood. In this talk, I will discuss some progress in the understanding of the dynamics of the standard map.

**Title:** Learning physics-based reduced-order models from data: Operator inference for parametric partial differential equations

**Abstract:** Large-scale numerical simulations of complex physical systems form the backbone of many modern scientific applications. For decades, mathematicians and computational scientists have focused on solving the forward problem of mapping initial/boundary conditions, system parameters, and auxiliary inputs to the corresponding solution of a known dynamical system. Next-generation scientific tasks such as physics-constrained optimization, optimal experimental design, and uncertainty quantification require many forward simulations (sometimes thousands or millions), each with different scenario parameters. Unfortunately, forward problems are often computationally intensive due to spatial and temporal resolution demands. Model order reduction seeks to alleviate the computational burden of forward solves by replacing expensive numerical simulations of complex physical systems with inexpensive surrogate models, called reduced-order models.

Classical model order reduction techniques construct reduced-order models by directly compressing the discretized governing equations, but this approach is infeasible for production-level codes where the discretization details are highly complex, proprietary, or classified. This talk presents Operator Inference, a data-driven model order reduction framework for constructing reduced-order models using only (i) knowledge of the structure of the governing equations and (ii) available simulation data. We detail a method for ensuring stability in the reduced-order model through a regularization selection procedure and use Bayesian inference to quantify the uncertainties associated with the data-driven learning. We also show how, for a large class of parametric systems, parametric dependencies can be embedded directly into the reduced-order model. In this setting, well-posedness conditions for the learning problem lead to a parameter selection criterion. The methodology is demonstrated on a variety of applications, including a single-injector combustion process and the FitzHugh-Nagumo neuron model.

**Title:** Taking Chances

**Abstract:** Many of the day-to-day decisions we make require us to weigh risks. Navigating the uncertainty in our lives is fraught with difficulty, as our intuition and our experience will, more often than not, lead us astray. Should I check my bag or carry it on? Should I finish the last leg of my road trip or find a hotel and drive home in the morning? Should I pay for the extended warranty? We will explore these questions and discuss other everyday topics in probability.

**Title:** Doing More with Less: Graph-based Methods for Learning from Limited Observations

**Abstract:** Modern research in machine learning has primarily focused on the supervised learning of functions from massive amounts of labeled data, where inputs with their observed outputs (labels) are available to the learning algorithm. However, in applications, it is often more realistic to have plenty of unlabeled data (i.e., inputs without known labels) but only a few labeled examples. With a common theme of leveraging the geometric structure of data through similarity graphs, I will present my recent work on the theoretical understanding and computational application of semi-supervised and active learning paradigms for learning when labeled data are scarce but unlabeled data are plentiful. I will discuss my work to prove Bayesian posterior contraction in graph-based semi-supervised regression and how it inspired the subsequent design of a computationally efficient graph-based active learning method. I will also present a novel uncertainty sampling criterion for active learning in a graph-based model that has a well-defined continuum limit partial differential equation formulation; this continuum limit model facilitates the establishment of rigorous mathematical guarantees about the sampling complexity of the proposed method. Experimental results will demonstrate the utility of the methods in various applications, such as pixel classification in hyperspectral imagery and automatic target recognition in synthetic aperture radar imagery.