
PADS Seminar

This is the homepage of the Probability, Analysis, and Data Science (PADS) seminar. In the Spring 2020 semester, the seminar takes place on Wednesdays at 3:10pm in Carver 204.




Spring 2020 Speaker Title and Abstract
2/12/20 Justin Peters

Title: Representations of Operator Algebras

Abstract: We will present some examples of operator algebras, and review classical representation theory of C*-algebras. Recent results for nonself-adjoint operator algebras will be mentioned.  The talk will focus on motivation for current research.  Little background will be assumed.

2/19/20 Eric Weber

Title: The Universal Approximation Theorem

Abstract:  The Universal Approximation Theorem is a result from machine learning: it says that any continuous function on a compact set can be uniformly approximated by a neural network.  Even though the result is motivated by data science, its proof is entirely functional analytic.  We will go through the proof, and discuss some interesting open questions.
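As a quick numerical companion to the theorem (my own illustration, not part of the talk): a one-hidden-layer network with randomly chosen ReLU features, with only the output weights fitted by least squares, already approximates a continuous target well on a compact interval.

```python
import numpy as np

# Minimal sketch of universal approximation in action: random ReLU features
# plus a least-squares output layer approximating sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)            # a continuous function on [0, 1]

n_hidden = 200
W = rng.normal(scale=10.0, size=n_hidden)     # random input weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)   # random biases

def features(x):
    # hidden-layer activations ReLU(W*x + b)
    return np.maximum(0.0, np.outer(x, W) + b)

x_train = np.linspace(0.0, 1.0, 500)
c, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)

x_test = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ c - target(x_test)))
print(f"sup-norm error on [0,1]: {err:.4f}")
```

The theorem guarantees that the sup-norm error can be driven to zero as the width grows; the sketch only shows that a modest width already gets close.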

3/11/20 Evan Camrud  
3/25/20 Hung Nguyen (UCLA)  
4/1/20 Nate Harding  
4/15/20 David Renfrew (SUNY Binghamton)  
4/22/20 Cecilia Mondaini (Drexel)  
4/29/20 Ilwoo Cho (St. Ambrose U.)  


In the fall semester, the seminar takes place Wednesdays at 2:10pm in Carver 202.

Fall 2019 Speaker Title and Abstract
09/04/19   Organizational meeting
09/11/19 Krishna Athreya (ISU)

Title: One Uniform [0,1] RV will do.

Abstract: Suppose X is a uniform [0,1] random variable. Then there exists a sequence of Borel measurable functions f_i on [0,1] such that the f_i(X) are iid uniform [0,1] random variables. So the amount of randomness in one uniform RV is the same as that of the infinite-dimensional unit cube.
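A standard construction behind this claim (my gloss, not necessarily the speaker's exact proof) routes the binary digits of X into disjoint subsequences; each subsequence reassembles into a uniform [0,1] variable, and disjoint digit sets give independence.

```python
# Sketch: split the binary digits of one uniform [0,1] variable into k
# disjoint streams, each of which defines a new number in [0,1].
def bits_of(x, n):
    """First n binary digits of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= 2
        d = int(x)
        out.append(d)
        x -= d
    return out

def split_uniform(x, k, depth):
    """Route digit j of x to stream j mod k; return the k resulting reals."""
    bs = bits_of(x, k * depth)
    streams = [bs[i::k] for i in range(k)]
    return [sum(b / 2 ** (j + 1) for j, b in enumerate(s)) for s in streams]

# x = 0.101000..._2 = 0.625: stream 0 gets digits 1,0,... -> 0.5,
# stream 1 gets 0,0,... -> 0.0, stream 2 gets 1,0,... -> 0.5
print(split_uniform(0.625, 3, 8))  # [0.5, 0.0, 0.5]
```

To get infinitely many streams rather than k, one uses a bijection between the naturals and pairs of naturals in the same way.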

09/18/19 Pelin Guvin Geredeli (ISU)

Title: Wellposedness results of a linearized flow-plate interaction PDE model under different boundary conditions

Abstract: In this talk we consider a flow of compressible fluid, and study perturbations about this given flow state. Such perturbations will be induced by a coupling to a non-stationary elastic dynamics embedded in the fluid's boundary. We restrict our attention to such flow-structure interactions where the fluid exists in a 3-D spatial domain, bounded by a 2-D Lipschitz domain. A flat portion of the boundary is the equilibrium state of an elastic plate, with dynamics dictated by a fourth order plate equation that neglects the effects of rotational inertia. The flow and structure are strongly coupled at the fluid-structure interface, with the plate dynamics affecting the flow through a normal component boundary condition, and the flow dynamics providing the dynamic distributed stresses across the surface of the plate. Taking the flow dynamics to be given by a linearization of compressible Navier-Stokes about a non-zero flow state will turn out to have important repercussions in the interior (flow) dynamics, as well as at the flow-structure interface. We track the effects of this term at the flow-structure interface, yielding (i) a velocity matching condition and (ii) a boundary condition involving the material derivative of the structure. Adapting a Lumer-Phillips approach, we prove the wellposedness of these models.

09/25/19 Sara Nayer (ISU, ECE)

Title: Phaseless PCA: Low-Rank Matrix Recovery from Column-wise Phaseless Measurements

Abstract: We study the following problem: recover a low-rank matrix from phaseless (magnitude-only) linear projections of each of its columns. In analogy with Robust PCA, we refer to this problem as "Phaseless PCA". It finds important applications in phaseless dynamic imaging, e.g., Fourier ptychographic imaging of live biological specimens. This work introduces the first provably correct solution, Alternating Minimization for Low-Rank Phase Retrieval (AltMinLowRaP), for this problem. Our guarantee for AltMinLowRaP shows that it can recover an n × q matrix of rank r to ε accuracy using only about n r^4 log(1/ε) measurements, as long as the matrix condition number is bounded by a numerical constant. This sample complexity is only r^3 times worse than the order-optimal value of max(n, q) r. The only other assumption needed for our result is incoherence of the right singular vectors of the matrix. We demonstrate the practical advantage of AltMinLowRaP over existing work via extensive simulations and some real-data experiments. We also briefly study the dynamic extension of the above problem.
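To see why "phaseless" makes the problem hard (a toy illustration of the measurement model, not the AltMinLowRaP algorithm itself): magnitude-only measurements cannot distinguish a vector from its negation, so recovery guarantees in this setting are necessarily stated up to a global phase or sign.

```python
import numpy as np

# Magnitude-only measurements |A x| are identical for x and -x,
# so the sign (phase) of x is fundamentally unrecoverable.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))   # 8 linear measurement vectors in R^4
x = rng.normal(size=4)

y_pos = np.abs(A @ x)
y_neg = np.abs(A @ (-x))
print(np.allclose(y_pos, y_neg))  # True: x and -x give the same data
```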

10/02/19 Evan Camrud (ISU)

Title: Fractional operators via analytic interpolation of integer powers

Abstract: Although the study of functional calculus has already established necessary and sufficient conditions for operators to be fractionalized, this talk aims to use our well-conceived notion of integer powers of operators to construct non-integer powers of operators. In doing so, we not only provide a more intuitive understanding of fractional theories, but also provide a framework for producing new fractional theories. Such interpolations allow one to approximate fractional powers by finite sums of integer powers of operators, and thus may find much use in numerical analysis of fractional PDEs and the time-frequency analysis of the fractional Fourier transform. Further, these results provide an interpolation procedure for the Riemann zeta function.
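A hedged toy example of the abstract's theme (my own construction, not the speaker's): a fractional power of a symmetric positive-definite matrix can be approximated by a finite sum of integer powers, i.e. a polynomial p(A) = Σ c_k A^k where p(t) ≈ t^(1/2) on the spectrum of A.

```python
import numpy as np

# Approximate A^(1/2) by a polynomial in A, i.e. a finite sum of
# integer matrix powers, by fitting p(t) ~ sqrt(t) on the spectral interval.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigenvalues 1 and 3

# Fit a degree-6 polynomial to sqrt(t) on [1, 3], which contains the spectrum
ts = np.linspace(1.0, 3.0, 200)
coeffs = np.polyfit(ts, np.sqrt(ts), deg=6)   # highest-degree first

# Evaluate p(A) as a sum of integer matrix powers c_0 I + c_1 A + ... + c_6 A^6
p_A = sum(c * np.linalg.matrix_power(A, d)
          for d, c in enumerate(reversed(coeffs)))

# Reference square root via eigendecomposition
w, V = np.linalg.eigh(A)
sqrt_A = V @ np.diag(np.sqrt(w)) @ V.T

err = np.max(np.abs(p_A - sqrt_A))
print(err)   # small approximation error
```

The same idea extends to other non-integer powers by fitting t^s instead of t^(1/2); the quality of the approximation is governed by how well the polynomial tracks t^s on the spectrum.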

10/09/19 Fritz Keinert (ISU)  
10/16/19 Chinmay Hegde (ISU, ECE)

Title: Towards a Theoretical Understanding of Inverse Problems with Neural Priors

Abstract: Inverse problems of various flavors span all of science, engineering, and design. Over the last five years, approaches based on neural networks have emerged as the tool of choice for solving such problems. However, a clear theoretical understanding of how well such approaches perform -- together with quantitative sample-complexity and running-time bounds -- remains elusive.

We will first discuss a natural algorithmic approach for solving inverse problems using pre-trained neural generative models (such as GANs). This approach will lead to upper (and in some cases, tight) bounds for inverse imaging problems such as compressive sensing and phase retrieval.

The main drawback of GAN models is the requirement of massive amounts of training data up front. To alleviate this, we will introduce a new family of generative model priors that can naturally interpolate between the data-rich and data-poor regimes. We discuss their utility, along with theoretical guarantees, for solving certain families of partial differential equations.

10/23/19 Mike Catanzaro (ISU)

Title: Stochastic Dynamics of Cellular Cycles

Abstract: In this talk, we will explore stochastic motion of cellular cycles inside CW complexes. This serves as a generalization of random walks on graphs, and a discretization of stochastic flows on smooth manifolds. We will define a notion of stochastic current, connect it to classical electric current, and show it satisfies a quantization result. Along the way, we will define the main combinatorial objects of study, namely spanning trees and spanning co-trees in higher dimensions. We will relate these to stochastic current, as well as discrete Hodge theory.

10/30/19 Jennifer Newman (ISU)

Title:  Score-based likelihood ratios and the camera source problem

Abstract: The use of statistics to provide a sound basis to evaluate forensic evidence is one of CSAFE's goals. In this talk I provide a problem statement for the camera source problem and describe the use of score-based likelihood ratios as a method to determine the chance that an image was acquired by a specific camera, using the photo-response non-uniformity (PRNU) noise residue residing in the image. This is work in progress by my PhD student Stephanie Reinders.

11/06/19   INFAS in Des Moines, Saturday, November 2
11/13/19 Enrico Au-Yeung (DePaul)

Title: Phase transition for eigenvalues and recovery of rank one matrices

Abstract: In datasets where the number of parameters is fixed and the number of samples is large, principal component analysis (PCA) is a powerful dimension reduction tool. However, in many contemporary datasets, when the number of parameters is comparable to the sample size, PCA can be misleading. A closely related problem is the following: is it possible to recover a rank-one matrix in the presence of a large amount of noise? In both situations, there is a phase transition in the eigen-structure of the matrix. If time permits, I will talk about some related phenomena or applications.

12/04/19 Zhengdao Wang (ISU, ECE)

Title: Equivariance, convolution, and neural networks

Abstract: Deep convolutional neural networks have seen great success in solving some of the speech and vision problems in recent years. In this talk, we will discuss the motivation for using convolution in neural networks, and present a few results on the design of convolutional and generalized convolutional neural networks. In particular, we will discuss the connection with group equivariance, the design of networks when multiple equivariances are required, and some networks that can be designed to process graph data.
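The equivariance property that motivates convolution can be checked in a few lines (a generic illustration, not from the talk): circular convolution commutes with translation, so shifting the input and then convolving gives the same result as convolving and then shifting.

```python
import numpy as np

# Verify translation equivariance: conv(shift(x)) == shift(conv(x))
def circ_conv(x, k):
    """Circular convolution of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
k = np.array([0.5, 0.25, 0.25])
shift = lambda v, s: np.roll(v, s)

lhs = circ_conv(shift(x, 2), k)   # shift first, then convolve
rhs = shift(circ_conv(x, k), 2)   # convolve first, then shift
print(np.allclose(lhs, rhs))      # True: convolution is translation-equivariant
```

Group-equivariant networks generalize this: the shift is replaced by the action of some other group, and the convolution is redesigned so the same commuting property holds.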

Aditya Ramamoorthy

Title: Leveraging Coding for Distributed Computing

Abstract: Large scale computing clusters that process huge amounts of data are ubiquitous in both industry and academia. The widespread usage of these clusters presents several advantages over traditional computing paradigms. However, they also present newer challenges where coding-theoretic methods are poised to make a major impact. For example, such clusters are well known to suffer from the problem of "stragglers" (slow or failed nodes in the system). A qualitatively different problem arises in distributed computing frameworks such as MapReduce which intertwine node computation and network communication. For several classes of jobs, the network communication time required for the shuffle phase of MapReduce can be a substantial part of the overall job execution time.

This talk will consist of two parts. In the first part, I will provide a brief overview of my group's work on using innovative code constructions for addressing both these issues. Both problems require examining coding-theoretic ideas from different angles. In the second part, I will discuss in detail our recent work on straggler mitigation for distributed matrix computations. Matrix computations are a key component of various algorithms within machine learning and scientific computing. Prior work in this area has largely treated stragglers as equivalent to erasures in coding theory. However, in the distributed computation context, one often has slow (rather than failed) workers, and efficient techniques for exploiting partial stragglers are needed. Another equally important issue is that matrix computations occur over the reals, which introduces crucial considerations such as the numerical precision of the recovered result. For example, Reed-Solomon codes (which enjoy the best node failure tolerance threshold) face severe numerical stability issues in this context, owing to the high condition number of the associated Vandermonde matrices that need to be inverted for decoding. I will discuss our recent work on using universally decodable matrices (UDMs) and random convolutional codes for these problems. UDMs are a systematic way to leverage partial computations by the stragglers. Random convolutional codes allow for fast decoding while enjoying an optimal tolerance threshold. In both cases we obtain numerically robust solutions.
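The core straggler-mitigation idea can be shown in miniature (illustrative only; this is the classic MDS-coded scheme, not the UDM or convolutional-code constructions from the talk): split a matrix into two halves and give three workers the tasks A1·x, A2·x, and (A1+A2)·x, so that any two completed tasks suffice to recover A·x and one straggler can be ignored.

```python
import numpy as np

# Coded matrix-vector multiplication tolerating one straggler out of three
# workers, using a (3, 2) MDS-style code over the row blocks of A.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3))
x = rng.normal(size=3)

A1, A2 = A[:2], A[2:]                          # split A into two row blocks
w1, w2, w3 = A1 @ x, A2 @ x, (A1 + A2) @ x     # the three worker tasks

# Suppose worker 1 straggles: recover A1 @ x as w3 - w2, then assemble A @ x
recovered = np.concatenate([w3 - w2, w2])
print(np.allclose(recovered, A @ x))  # True
```

The Vandermonde-conditioning issue mentioned above arises when this scheme is scaled up: with many blocks, the natural generalization uses polynomial (Reed-Solomon style) encodings, and decoding then requires inverting real Vandermonde matrices whose condition numbers grow rapidly.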