This page is no longer being maintained. Please visit the new UMD Mathematics website at www-math.umd.edu.
DEPARTMENT OF MATHEMATICS
Math Home > Research > Seminars > Numerical Analysis Seminar

(January 26) Prof. Alastair Spence: The calculation of the distance to instability by computing a Jordan block — A new algorithm is presented for computing the distance of a stable matrix to the set of unstable matrices. The method is based on finding a two-dimensional Jordan block in a Hamiltonian eigenvalue problem first introduced by Byers in 1988. Numerical results will be presented.
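As background, the distance to instability of a stable matrix A (2-norm) equals the minimum over real frequencies ω of the smallest singular value of A − iωI, the characterization that Byers' 1988 method exploits. A brute-force sketch of that quantity on a frequency grid (not the Jordan-block algorithm of the talk; the example matrix is an illustrative choice):

```python
import numpy as np

def distance_to_instability(A, omegas):
    """Approximate beta(A) = min over real omega of sigma_min(A - i*omega*I)
    by scanning a frequency grid; a brute-force stand-in for the
    Hamiltonian-eigenvalue algorithms discussed in the talk."""
    I = np.eye(A.shape[0])
    return min(np.linalg.svd(A - 1j * w * I, compute_uv=False)[-1]
               for w in omegas)

# A stable normal matrix with eigenvalues -1 +/- 2i, so beta(A) = 1
A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])
beta = distance_to_instability(A, np.linspace(-5.0, 5.0, 2001))
```

For a normal matrix the minimum is attained at the imaginary part of the closest eigenvalue, so here beta is exactly the distance of the spectrum to the imaginary axis.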

(February 2) Prof. Alice de Jesus Kozakevicius: Adaptive multiresolution WENO schemes for Kinematic Flow Problems — Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high-resolution schemes. In this work, a fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. The adaptivity is obtained through a hard thresholding of the wavelet coefficients from the wavelet representation of the solution at each time step.

Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic flow models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme.
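The hard-thresholding step can be illustrated with a plain Haar wavelet transform (a minimal stand-in for the multiresolution analysis behind the SPR, not the scheme of the talk): for a profile with a single shock, only the few detail coefficients near the shock survive the threshold, yet the reconstruction is essentially exact.

```python
import numpy as np

def haar_forward(u, levels):
    """Orthonormal Haar wavelet transform: returns coarse averages plus
    one array of detail coefficients per level."""
    c = u.copy()
    details = []
    for _ in range(levels):
        even, odd = c[0::2], c[1::2]
        details.append((odd - even) / np.sqrt(2.0))
        c = (even + odd) / np.sqrt(2.0)
    return c, details

def haar_inverse(c, details):
    for d in reversed(details):
        even = (c - d) / np.sqrt(2.0)
        odd = (c + d) / np.sqrt(2.0)
        c = np.empty(2 * c.size)
        c[0::2], c[1::2] = even, odd
    return c

# Piecewise-constant profile with one shock
u = np.where(np.arange(256) < 100, 1.0, 0.0)
coarse, details = haar_forward(u, 4)
eps = 1e-8                                              # hard threshold
kept = sum(int(np.sum(np.abs(d) > eps)) for d in details)
details = [np.where(np.abs(d) > eps, d, 0.0) for d in details]
u_rec = haar_inverse(coarse, details)                   # rebuild from the sparse set
```

Out of 240 detail coefficients, only a handful (those whose supports straddle the shock) exceed the threshold, which is exactly the sparsity the SPR exploits.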

(February 16) Abner Salgado-Gonzalez: Fractional Time-Stepping Techniques for Incompressible Flows — Projection schemes have been used in CFD for about four decades, starting with the pioneering works of Chorin and Temam. They are one of the most popular methods to solve the unsteady Navier-Stokes equations. The main idea of these methods is to decouple the diffusion from the incompressibility constraint by decomposing vector fields into a divergence-free part and a gradient.

One of the main bottlenecks in these techniques is the determination of the pressure. For large problems this becomes the dominant part of the computational cost. In this talk, I will briefly review some well-known results concerning the so-called pressure correction schemes for incompressible flows, and show that these schemes can also be interpreted as a penalization of the divergence in certain negative norms. This point of view is somewhat orthogonal to the current mainstream in the literature, which mainly focuses on the projection point of view. I will discuss how, with the aid of this different approach, we are currently developing new methods that reduce the computational complexity of enforcing the incompressibility constraint.

The departure from the projection paradigm also proves very useful when solving variable density flows. As a second part of my talk, I will discuss a novel fractional time-stepping technique for incompressible flows with variable density. In this new method the determination of the pressure equation involves, as opposed to all other existing techniques, only the solution of a Poisson equation, which greatly reduces the computational cost.
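The decomposition underlying projection schemes — velocity = divergence-free part + gradient — is cheap to carry out on a periodic grid with the FFT. The sketch below is only an illustration of that Helmholtz/Leray projection, not of the speaker's pressure-correction or variable-density schemes:

```python
import numpy as np

def leray_projection(u, v):
    """Project a 2D periodic vector field onto its divergence-free part:
    solve a Poisson problem for the potential of the gradient part in
    Fourier space and subtract it. Grid is n x n on [0, 2*pi)^2."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid 0/0 on the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh          # divergence in Fourier space
    phi_h = -div_h / k2                          # solve -k^2 phi_h = div_h
    uh -= 1j * kx * phi_h                        # subtract the gradient part
    vh -= 1j * ky * phi_h
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y) + np.cos(2.0 * X)      # field with both parts
v = np.sin(X) * np.cos(Y)
up, vp = leray_projection(u, v)
```

The projected field (up, vp) is divergence-free to machine precision on the grid; in a projection scheme the subtracted potential plays the role of the pressure increment.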

(February 23) Dr. Chensong Zhang: Multilevel Eulerian-Lagrangian Methods for Complex Fluids — The link between various viscoelastic fluid models and symmetric matrix Riccati differential equations can serve as a new device that provides a unified means for the numerical treatment of viscoelastic models. In this talk, we describe a few steps toward efficient numerical schemes for complex fluids simulation. First, we construct stable finite element discretizations using Eulerian--Lagrangian methods based on the Riccati formulations of the viscoelastic models. Then we develop a new multilevel time-marching scheme; together with adaptive time-stepping schemes and time-parallel schemes, this yields efficient methods for complex fluids simulation. Furthermore, we discuss robust and efficient multilevel solvers for the Stokes-type systems arising at each time step in the Eulerian--Lagrangian discretization. Finally, we show some preliminary numerical results.

(March 4) Prof. Ronald DeVore: Solving a Parametric Family of Elliptic PDEs — Parametric PDEs arise in their own right in design problems but also as a means of solving stochastic PDEs. The main objective in this field is to avoid solving the PDE for each choice of parameters. There are several strategies for doing so. We shall discuss two of these. The first is the so-called reduced basis method which can be formulated as a greedy algorithm in a Hilbert space. We shall discuss what is known about convergence rates for this algorithm. The second class of methods try to exploit the analytic dependence of the solution on the parameters. We shall discuss what we know about this analyticity and what it implies about sparse recovery of the solutions to a parametric family of elliptic problems.
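In a finite-dimensional caricature, the (strong) greedy algorithm behind the reduced basis method repeatedly adds the snapshot that is worst approximated by the span of the current basis. A sketch with hypothetical snapshot vectors lying in a two-dimensional subspace, so the greedy loop stops after two selections:

```python
import numpy as np

def greedy_basis(snapshots, tol):
    """Strong greedy selection: repeatedly add the snapshot with the
    largest projection error onto the span of the current basis,
    orthonormalizing as we go."""
    basis = []
    while True:
        errs = [np.linalg.norm(s - sum(np.dot(b, s) * b for b in basis))
                for s in snapshots]
        i = int(np.argmax(errs))
        if errs[i] <= tol:
            return np.array(basis), errs[i]
        r = snapshots[i] - sum(np.dot(b, snapshots[i]) * b for b in basis)
        basis.append(r / np.linalg.norm(r))

# Hypothetical snapshots: a one-parameter family inside a 2D subspace of R^5
snaps = [np.array([mu, mu**2, 0.0, 0.0, 0.0])
         for mu in np.linspace(0.1, 1.0, 20)]
basis, final_err = greedy_basis(snaps, 1e-10)
```

In the genuine method the snapshots are PDE solutions u(mu) in a Hilbert space and the selection is driven by an a posteriori error surrogate rather than exact projection errors; the convergence rates of this greedy selection are exactly what the talk analyzes.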

(March 5) Prof. Ronald DeVore: A Taste of Compressed Sensing — Compressed Sensing is a new paradigm in signal and image processing. It seeks to faithfully capture a signal/image with the fewest number of measurements. Rather than model a signal as a bandlimited function or an image as a pixel array, it models both of these as a sparse vector in some representation system. This model fits real-world signals and images well; for example, images are well approximated by a sparse wavelet decomposition. Given this model, how should we design a sensor to capture the signal with the fewest number of measurements? We shall introduce ways of measuring the effectiveness of compressed sensing algorithms and then show which of these are optimal.
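As a concrete illustration of the paradigm (one standard decoder, not the talk's optimality analysis): a k-sparse vector can typically be recovered from far fewer random Gaussian measurements than its length, here via orthogonal matching pursuit. The sizes and seed below are illustrative choices.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian sensor
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
x_hat = omp(A, A @ x_true, k)                    # recover from 50 measurements
```

Recovery from m = 50 noiseless measurements of a length-128 vector succeeds here with overwhelming probability; quantifying how small m can be, uniformly over all k-sparse vectors, is the optimality question of the talk.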

(March 9) Dr. Harbir Antil: Optimization and Model Reduction of Time Dependent PDE-Constrained Optimization Problems: Applications to Microfluidic Biochips — The optimal design of structures and systems described by partial differential equations (PDEs) often gives rise to large-scale optimization problems, in particular if the underlying system of PDEs represents a multiscale, multiphysics problem. Therefore, reduced order modeling techniques such as balanced truncation model reduction (BTMR), proper orthogonal decomposition (POD), or reduced basis methods (RB) are used to significantly decrease the computational complexity while maintaining the desired accuracy of the approximation. We are interested in such shape optimization problems where the design issue is restricted to a relatively small portion of the computational domain and in optimal control problems where the nonlinearity is local in nature. In these cases, it appears to be natural to rely on a full order model only in that specific part of the domain and to use a reduced order model elsewhere. A convenient methodology to realize this idea is a suitable combination of domain decomposition techniques and BTMR. We will consider such an approach for optimal control and shape optimization problems governed by advection-diffusion equations and derive explicit error bounds for the modeling error. As an application in life sciences, we will be concerned with the optimal design of capillary barriers as part of a network of microchannels and reservoirs on surface acoustic wave driven microfluidic biochips. Here, the state equations represent a multiscale multiphysics problem consisting of the linearized equations of piezoelectricity and the compressible Navier-Stokes equations. The multiscale character is due to the occurrence of fluid flow on different time scales. 
A standard homogenization approach by means of a state parameter results in first-order time-periodic linearized compressible Navier-Stokes equations and a second-order compressible Stokes system. The second-order compressible Stokes system provides an appropriate model for the optimal design of the capillary barriers. This talk is based on joint work with M. Heinkenschloss (Rice University), R.H.W. Hoppe (University of Houston), and D.C. Sorensen (Rice University).

(March 23) Prof. Michael Overton: Characterization and Construction of the Nearest Defective Matrix via Coalescence of Pseudospectral Components — Let $A$ be an $n\times n$ matrix with $n$ distinct eigenvalues. Its open $\epsilon$-pseudospectrum is the set of points in the complex plane which are eigenvalues of a matrix $B$ with $||A-B|| < \epsilon$. For sufficiently small $\epsilon$ the pseudospectrum consists of $n$ disjoint connected components (one around each eigenvalue), and as $\epsilon$ is increased these components coalesce with one another. Let $c(A)$ be the supremum of all $\epsilon$ for which the pseudospectrum has $n$ components.

Let $w(A)$ be the distance from $A$ to the set of defective matrices, using either the 2-norm or the Frobenius norm. Demmel and Wilkinson independently observed in the early 1980s that $w(A)\geq c(A)$, and equality was established for the 2-norm by Alam and Bora in 2005. We give new results on the geometry of the pseudospectrum near points where coalescence of the components occurs, characterizing such points as the lowest generalized saddle point of the smallest singular value of $A-zI$ over $z\in\mathbb{C}$. One consequence is that $w(A) = c(A)$ for the Frobenius norm too, and another is the perhaps surprising result that the minimal distance is attained by a defective matrix in all cases. Our results suggest a new computational approach to approximating the nearest defective matrix.

Coauthors: R. Alam, S. Bora, R. Byers
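The objects above are easy to probe numerically: sigma_min(A - zI) vanishes exactly at the eigenvalues, and the level at which the two pseudospectral components around them merge is bounded above by the mountain-pass value of sigma_min along any path joining the eigenvalues. A small sketch for an illustrative non-normal 2x2 matrix, using the straight segment as the path:

```python
import numpy as np

def smin(A, z):
    """sigma_min(A - zI); the open epsilon-pseudospectrum is its strict
    sublevel set {z : smin(A, z) < epsilon}."""
    return np.linalg.svd(A - z * np.eye(A.shape[0]), compute_uv=False)[-1]

A = np.array([[1.0, 10.0],
              [0.0, 2.0]])                       # non-normal, eigenvalues 1 and 2
at_eig = smin(A, 1.0)                            # vanishes at an eigenvalue
# Upper bound on the coalescence level c(A): the highest value of smin
# encountered along the segment joining the two eigenvalues (the true
# saddle is a minimum of this quantity over all connecting paths).
pass_level = max(smin(A, z) for z in np.linspace(1.0, 2.0, 101))
```

The strong non-normality (the off-diagonal 10) makes the pass level far smaller than the eigenvalue gap would suggest, so the pseudospectral components merge for quite small epsilon.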

(March 30) Prof. Coralia Cartis: Adaptive regularization methods for nonlinear optimization — Nonlinear nonconvex optimization problems represent the bedrock of numerous real-life applications, such as data assimilation for weather prediction, radiation therapy treatment planning, optimal design of energy transmission networks, and many more. Finding (local) solutions of these problems usually involves iteratively constructing easier-to-solve local models of the function to be optimized, with the optimizer of the model taken as an estimate of the sought-after solution. Linear or quadratic models are usually employed locally in this context, yielding the well-known steepest descent and Newton approaches, respectively. However, these approximations are often unsatisfactory, either because they are unbounded in the presence of nonconvexity and hence cannot be meaningfully optimized, or because they are accurate representations of the function only in a small neighbourhood, yielding only small or no iterative improvements. Hence such models require some form of regularization to improve algorithm performance and avoid failure; traditionally, linesearch and trust-region techniques have been employed for this purpose and represent the state of the art. Here, we first show that the steepest descent and Newton's methods, even with the above-mentioned regularization strategies, may both require a similar number of iterations and function evaluations to reach approximate first-order optimality, implying that the known bound for the worst-case complexity of steepest descent is tight and that Newton's method may be as slow as steepest descent under standard assumptions. Then, a new class of methods will be presented that approximately globally minimize a quadratic model of the objective regularized by a cubic term, extending to practical large-scale problems earlier regularization approaches by Nesterov (2007) and Griewank (1982).
Preliminary numerical experiments show our methods to perform better than a trust-region implementation, while our convergence results show them to be at least as reliable as the latter approach. Furthermore, their worst-case complexity is provably better than that of steepest descent, Newton's and trust-region methods.

This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium).
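A one-dimensional caricature of the cubic-regularization idea: at each iterate, globally minimize the model m(s) = g s + h s^2/2 + (sigma/3)|s|^3, which is bounded below even when h < 0, then adapt sigma according to how well the model predicted the actual decrease. The test function and acceptance parameters are illustrative choices, not those of the talk.

```python
import numpy as np

def cubic_step(g, h, sigma):
    """Global minimizer s of m(s) = g*s + h*s^2/2 + sigma*|s|^3/3
    (1D closed form via the stationary points on each sign branch)."""
    cands = [0.0]
    disc = h * h - 4.0 * sigma * g                 # stationary points with s > 0
    if disc >= 0.0:
        cands += [s for s in ((-h + np.sqrt(disc)) / (2.0 * sigma),
                              (-h - np.sqrt(disc)) / (2.0 * sigma)) if s > 0.0]
    disc = h * h + 4.0 * sigma * g                 # stationary points with s < 0
    if disc >= 0.0:
        cands += [s for s in ((h + np.sqrt(disc)) / (2.0 * sigma),
                              (h - np.sqrt(disc)) / (2.0 * sigma)) if s < 0.0]
    m = lambda s: g * s + 0.5 * h * s * s + sigma * abs(s) ** 3 / 3.0
    return min(cands, key=m)

def arc_minimize(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=100):
    """Adaptive cubic regularization: accept a step when the actual
    decrease is at least 10% of the model's predicted decrease."""
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        s = cubic_step(g, h, sigma)
        pred = -(g * s + 0.5 * h * s * s + sigma * abs(s) ** 3 / 3.0)
        if f(x) - f(x + s) >= 0.1 * pred:
            x, sigma = x + s, max(0.5 * sigma, 1e-3)   # success: relax sigma
        else:
            sigma *= 2.0                               # failure: regularize more
    return x

f = lambda x: x**4 - 3.0 * x**2 + x                    # nonconvex test function
x_star = arc_minimize(f, lambda x: 4.0 * x**3 - 6.0 * x + 1.0,
                      lambda x: 12.0 * x**2 - 6.0, x=0.0)
```

Started at x = 0, where the Hessian is negative and an unregularized quadratic model is unbounded below, the iteration still converges to a stationary point; in higher dimensions the subproblem no longer has a closed form, and solving it approximately at scale is the practical contribution described above.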

(April 6) Prof. Georgios Akrivis: Linearly implicit schemes for a class of dispersive--dissipative systems — We consider initial value problems for semilinear parabolic equations possessing a dispersive term, nonlocal in general. This dispersive term is not necessarily dominated by the dissipative term. In our numerical schemes, the time discretization is done by linearly implicit schemes. More specifically, we discretize the initial value problem by the implicit--explicit Euler scheme and by the two--step implicit--explicit BDF scheme. We also derive optimal order error estimates. We provide various physically relevant applications of dispersive--dissipative equations and systems fitting in our abstract framework.
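The implicit--explicit idea in its simplest form, on the scalar toy problem u' = -a u + i b u (dissipation treated implicitly, dispersion explicitly): each step only requires a division by 1 + dt*a, and the first-order accuracy of the IMEX Euler scheme shows up as the error halving when the step is halved. This is only a caricature of the abstract setting above.

```python
import numpy as np

def imex_euler(a, b, u0, T, nsteps):
    """IMEX Euler for u' = -a*u + i*b*u: the dissipative term -a*u is
    implicit, the dispersive term i*b*u explicit, so each step solves
    (1 + dt*a) u_new = u + dt*i*b*u."""
    dt = T / nsteps
    u = u0
    for _ in range(nsteps):
        u = (u + dt * 1j * b * u) / (1.0 + dt * a)
    return u

exact = np.exp((-1.0 + 4.0j) * 1.0)              # exact solution at T = 1
err_coarse = abs(imex_euler(1.0, 4.0, 1.0 + 0.0j, 1.0, 100) - exact)
err_fine = abs(imex_euler(1.0, 4.0, 1.0 + 0.0j, 1.0, 200) - exact)
ratio = err_coarse / err_fine                    # ~2 for a first-order scheme
```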

(April 13) Dr. Irene Kyza: A posteriori error estimates for approximations of Schroedinger-type and semilinear parabolic equations — We derive a posteriori error estimates for time discrete approximations of Schroedinger-type and semilinear parabolic equations. In the case of linear Schroedinger equations, fully discrete approximations will be studied as well. The estimates are of optimal order and aim to provide information about the behavior of the method even in cases where we have no information on the exact solution of the PDE. In the nonlinear cases we will mainly consider problems with possible blow-up in finite time. In these cases we derive conditional estimates (estimates which are valid under conditions of a posteriori type). We further discuss how these estimates can lead to error control near the blow-up time.

(April 20) Dr. Benjamin Stamm: Reduced Basis Method for the parametrized Electrical Field Integral Equation (EFIE) — Discretizing the EFIE is a common approach to solving scattering problems on unbounded domains, known as the Boundary Element Method (BEM) or Method of Moments (MoM). In many applications, such as optimization, shape recognition or inverse problems, just to mention a few, solving the boundary element problem anew for each parameter value is too expensive (and unnecessary).

The Reduced Basis Method is an accurate, efficient and reliable algorithm in the framework of parametrized problems and in a many-query context. We will present how the Reduced Basis Method is applied to parametrized scattering problems. The novelty is that, for the first time, the Reduced Basis Method is applied to an integral equation. We will discuss the challenges and present numerical examples.

(April 27) Panayot S. Vassilevski: Algebraic Multigrid (AMG) and Upscaling Error Estimates — Multigrid (or MG) is becoming the method of choice for solving large sparse systems of algebraic equations that typically arise from discretized partial differential equations (or PDEs), due to its potential for optimal order cost. We describe some necessary and sufficient conditions for having an optimal MG iteration method. Next, we focus on the algebraic versions of MG (or AMG). This refers to the case when the hierarchy of (vector) spaces needed to build a MG method is not available, hence it has to be constructed by the user, generally in some problem-dependent way. The construction of operator-dependent coarse spaces with some guaranteed approximation properties is also useful for discretization (upscaling) purposes. We present several approaches to construct AMG coarse spaces targeting finite element discretization problems. We show, based on our theory (necessary conditions), that the AMG-constructed coarse spaces corresponding to a good AMG solver have at least some weak approximation properties. For upscaling purposes, i.e., when the coarse spaces are meant as a discretization (upscaling) tool, stronger (in energy norm) approximation properties are needed. We describe one specific AMG method, the only one known to us, that ensures strong approximation properties.
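The geometric two-grid cycle that AMG generalizes can be sketched in a few lines for the 1D Poisson matrix, with the Galerkin coarse operator P^T A P built from a linear-interpolation prolongation P (AMG would instead construct P from the matrix itself). Grid sizes and smoothing parameters below are illustrative:

```python
import numpy as np

def two_grid(A, P, b, x, nu=2, omega=2.0/3.0):
    """One two-grid cycle: damped-Jacobi smoothing plus a Galerkin
    coarse-grid correction with A_c = P^T A P."""
    D = np.diag(A)
    for _ in range(nu):
        x = x + omega * (b - A @ x) / D          # pre-smoothing
    Ac = P.T @ A @ P                             # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ (b - A @ x))
    for _ in range(nu):
        x = x + omega * (b - A @ x) / D          # post-smoothing
    return x

n, nc = 63, 31                                   # fine and coarse grid sizes
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
P = np.zeros((n, nc))                            # linear interpolation
for j in range(nc):
    i = 2 * j + 1
    P[i, j], P[i - 1, j], P[i + 1, j] = 1.0, 0.5, 0.5
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, P, b, x)
res = np.linalg.norm(b - A @ x)
```

The residual drops by a mesh-independent factor per cycle, which is the optimality property discussed above; the approximation quality of range(P) is precisely what the weak and strong approximation properties quantify.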

(May 4) Dr. Matthias Conrad: Experimental Design for Biological Systems — Many problems in biology are governed by a dynamical system of differential equations with unknown parameters. To have a meaningful representation of the system, these parameters need to be estimated from observations. Experimentalists face a dilemma between the accuracy of the parameters and the cost of an experiment. The choice of the design of an experiment is important if we are to recover precise model parameters. It is important to know when and what kind of observations should be taken. Taking the wrong measurement can lead to inaccurate estimation of parameters, thus resulting in inaccurate representations of the dynamical system. Each experiment has its own specific challenges. However, optimization methods form the basic computational tool to address the central questions of optimal experimental design.

In this talk we present a methodology for the design of such experiments that can optimally recover parameters in a dynamical system, biological systems in particular. We show that the problem can be cast as a stochastic bilevel optimization problem. We then develop an effective algorithm that allows for the solution of the design problem. The advantages of our approach are demonstrated on a few basic biological models as well as a design problem for the energy metabolism to estimate insulin resistance.
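A toy version of the design question, with a hypothetical two-parameter observation model y(t) = theta1*exp(-t) + theta2*exp(-2t): choose three measurement times minimizing the A-optimality criterion, the trace of the inverse Fisher information (here by brute force over candidate times, rather than the stochastic bilevel optimization of the talk):

```python
import numpy as np
from itertools import combinations

def a_criterion(times):
    """A-optimality criterion trace((J^T J)^{-1}) for the hypothetical
    model y(t) = theta1*exp(-t) + theta2*exp(-2t) with unit noise."""
    J = np.column_stack([np.exp(-times), np.exp(-2.0 * times)])  # sensitivities
    M = J.T @ J                                                  # Fisher information
    if np.linalg.cond(M) > 1e12:
        return np.inf                                            # degenerate design
    return float(np.trace(np.linalg.inv(M)))

candidates = np.linspace(0.0, 3.0, 31)                           # admissible times
best = min(combinations(candidates, 3),
           key=lambda ts: a_criterion(np.array(ts)))
best_val = a_criterion(np.array(best))
```

Clustering all measurements at early times gives a nearly singular information matrix and hence huge parameter variance; the optimizer spreads the times out, which is the "when to measure" question in miniature.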

(May 18) Soeren Bartels: Robust Approximation of Phase Field Models past Topological Changes — Phase field models are often used to describe the evolution of submanifolds, e.g., the Allen-Cahn equation approximates motion by mean curvature and more sophisticated phase field models provide regularizations of the Willmore flow and other geometric evolution problems. The models involve small regularization parameters and we discuss the dependence of a priori and a posteriori error estimates for the numerical solution of the regularized problems on this parameter. In particular, we address the question whether robust error estimation is possible past topological changes. We provide an affirmative answer for a priori error estimates assuming a logarithmic scaling law of the time averaged principal eigenvalue of the linearized Allen-Cahn or Ginzburg-Landau operator about the exact solution. This scaling law is confirmed by numerical experiments for generic topological changes. The averaged eigenvalue about the approximate solution enters a posteriori error estimates exponentially and therefore, critical scenarios are detected automatically by related adaptive finite element methods. The devised scheme extracts information about the stability of the evolution from the approximate solution and thereby allows for a rigorous a posteriori error analysis.

This is joint work with Ruediger Mueller (HU Berlin) and Christoph Ortner (U Oxford).
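For orientation, a linearly implicit scheme for the 1D periodic Allen-Cahn equation u_t = u_xx - eps^{-2}(u^3 - u): diffusion is treated implicitly via FFT and the cubic term explicitly. The parameters are illustrative, and this sketch performs no error estimation; note the small regularization parameter eps sets both the interface width and the time-step restriction.

```python
import numpy as np

def allen_cahn_step(u, dt, dx, eps):
    """One linearly implicit step for u_t = u_xx - eps^{-2}(u^3 - u) on a
    periodic grid: diffusion implicit in Fourier space, reaction explicit."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    rhs = np.fft.fft(u - dt / eps**2 * (u**3 - u))
    return np.fft.ifft(rhs / (1.0 + dt * k**2)).real

n, eps = 256, 0.3
dx = 2.0 * np.pi / n
x = np.arange(n) * dx
# Two diffuse interfaces: u ~ -1 near the ends, u ~ +1 in the middle
u = np.tanh((x - 0.5 * np.pi) / (np.sqrt(2.0) * eps)) \
    * np.tanh((1.5 * np.pi - x) / (np.sqrt(2.0) * eps))
dt = 0.1 * eps**2
for _ in range(200):
    u = allen_cahn_step(u, dt, dx, eps)
```

In 1D the two interfaces interact only exponentially weakly, so the profile is metastable over this time horizon; topological changes (interfaces annihilating) are exactly the events whose effect on the error constants the talk analyzes.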

