This page is no longer being maintained. Please visit the new UMD Mathematics website at www-math.umd.edu.
DEPARTMENT OF MATHEMATICS

(September 6) Professor Noel Walkington: Construction and Analysis of Unstructured Mesh Generation Algorithms - Given a collection of points, edges, and faces in a bounded region $\Omega \subset \Re^d$, the meshing problem is to construct the coarsest possible triangulation of $\Omega$, built from tetrahedra of bounded aspect ratio, which ``conforms'' to the input. These requirements can lead to very complicated algorithms, so much so that it can be difficult to verify correctness. I will give an overview of the ideas and issues that arise when constructing algorithms to solve the meshing problem, and will indicate how the mesh generation problem touches on many areas of mathematics and computer science, such as approximation/interpolation theory, computational geometry, sphere packing, graph theory, and algorithm design.
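
One common shape measure for tetrahedra in the meshing literature is the circumradius-to-shortest-edge ratio. The talk's precise notion of aspect ratio may differ, so the short Python sketch below (a hypothetical helper, not the speaker's definition) is only meant to make such a quality measure concrete.

import numpy as np
from itertools import combinations

def radius_edge_ratio(p0, p1, p2, p3):
    """Circumradius divided by shortest edge length of a tetrahedron."""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    # The circumcenter c satisfies |c - p_i|^2 = |c - p_0|^2 for i = 1, 2, 3,
    # which gives the linear system 2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2.
    A = 2.0 * (pts[1:] - pts[0])
    b = np.sum(pts[1:]**2, axis=1) - np.sum(pts[0]**2)
    c = np.linalg.solve(A, b)   # fails (singular system) if the tetrahedron is degenerate
    circumradius = np.linalg.norm(c - pts[0])
    shortest_edge = min(np.linalg.norm(pts[i] - pts[j])
                        for i, j in combinations(range(4), 2))
    return circumradius / shortest_edge

# A regular tetrahedron attains the best possible value, sqrt(6)/4 ~ 0.612.
print(radius_edge_ratio((0, 0, 0), (1, 0, 0), (0.5, np.sqrt(3)/2, 0),
                        (0.5, np.sqrt(3)/6, np.sqrt(2.0/3.0))))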

(September 13) Mr. Jae-Hong Pyo: A Finite Element Gauge-Uzawa Method for the Evolution Navier-Stokes Equations - The Navier-Stokes equations of incompressible fluids are still a computational challenge. The numerical difficulty arises from the incompressibility constraint, which requires a compatibility condition (discrete inf-sup) between the finite element spaces for velocity and pressure. Several projection methods have been introduced for time discretization to circumvent the incompressibility constraint, but they suffer from boundary layers, which are either numerical or caused by non-physical boundary conditions on the pressure. We introduce a first-order Gauge-Uzawa method for time discretization coupled with a stable finite element method for space discretization. The method is unconditionally stable, consists of d+1 Poisson solves per time step, and does not exhibit pronounced boundary layer effects. We prove error estimates for both velocity and pressure under realistic regularity conditions via a variational approach, and illustrate the performance with several numerical experiments. This work is joint with R. H. Nochetto.
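
The Gauge-Uzawa scheme itself is not reproduced here; the Python sketch below only illustrates the generic pressure-projection idea that splitting methods of this kind build on, namely that a single Poisson solve suffices to make an intermediate velocity divergence-free. The 2D periodic grid and FFT-based Poisson solve are illustrative assumptions, not part of the method of the talk.

import numpy as np

def project(u_star, v_star, dx):
    """Divergence-free part of the intermediate velocity (u_star, v_star) on a periodic grid."""
    n = u_star.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Divergence of the intermediate velocity in Fourier space.
    div_hat = 1j * kx * np.fft.fft2(u_star) + 1j * ky * np.fft.fft2(v_star)
    # Solve Laplace(phi) = div u*, i.e. -|k|^2 phi_hat = div_hat in Fourier space.
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0              # avoid division by zero; the mean of phi is arbitrary
    phi_hat = -div_hat / k2
    phi_hat[0, 0] = 0.0
    # Correct: u = u* - grad(phi) is then discretely divergence-free.
    u = u_star - np.real(np.fft.ifft2(1j * kx * phi_hat))
    v = v_star - np.real(np.fft.ifft2(1j * ky * phi_hat))
    return u, v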

(September 27) Dr. Axel Voigt: From Macro- to Microscale in Modeling Silicon Crystal Growth - In order to produce silicon wafers suitable for microelectronic devices, a high quality of the silicon material must be ensured. Silicon crystals are grown from the melt, and their quality is highly process dependent. We present a mathematical model which relates macroscopic growth conditions to lattice defects in the growing crystal. The heat and mass transfer model is solved using an adaptive finite element method at different time and length scales. The simulation results are verified against experimental measurements.

(October 2) Professor Wolfgang Dahmen: Adaptive Multiscale Methods - A Somewhat Different Approach to Discretization - This talk is concerned with the design and analysis of adaptive schemes for a wide class of variational problems that are well-posed in the sense that the induced operator is a topological isomorphism from the ``energy space'' onto its dual. This covers classical elliptic boundary value problems, corresponding boundary integral formulations, and transmission problems, as well as indefinite problems like mixed formulations and saddle point problems. The main point is that, whenever the energy space has a wavelet characterization in terms of norm equivalences, this fact combined with the well-posedness allows one to transform the original problem into an equivalent one that is now well-posed in the Euclidean metric. The idea is to apply (conceptually) an iterative scheme to the latter (infinite dimensional) problem. At each stage of the iteration, the application of infinite dimensional operators to the current iterate is evaluated adaptively within dynamically updated precision tolerances. We highlight some concepts from nonlinear approximation and harmonic analysis that can be used to show that the schemes exhibit, in some sense, optimal work/accuracy rates. Some indications concerning the treatment of time dependent and nonlinear problems are given. The theoretical results are illustrated by numerical examples for definite and indefinite problems. The material presented in this talk is based on joint work with A. Cohen, R. DeVore, S. Dahlke and K. Urban.
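
For orientation, the transformation underlying this approach can be summarized as follows; the display is standard background for this line of work, not part of the abstract. Given a variational problem $a(u,v) = f(v)$ for all $v$ in the energy space $H$, and a suitably scaled wavelet basis $\{\psi_\lambda\}$ with the norm equivalence
\[
  \|v\|_H \;\sim\; \Big(\sum_\lambda |v_\lambda|^2\Big)^{1/2}
  \qquad\text{for } v = \sum_\lambda v_\lambda \psi_\lambda ,
\]
the problem becomes an equivalent system $\mathbf{A}\mathbf{u} = \mathbf{f}$ with $\mathbf{A} = \big(a(\psi_\mu,\psi_\lambda)\big)_{\lambda,\mu}$ boundedly invertible on $\ell_2$ and $\mathbf{f} = \big(f(\psi_\lambda)\big)_\lambda$. One may then apply, for instance, a damped Richardson iteration
\[
  \mathbf{u}^{n+1} = \mathbf{u}^n + \omega\,(\mathbf{f} - \mathbf{A}\mathbf{u}^n),
\]
in which each application of $\mathbf{A}$ and evaluation of $\mathbf{f}$ is carried out only approximately, within dynamically updated tolerances.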

(October 4) Professor Zhimin Zhang: Superconvergence of the Zienkiewicz-Zhu Patch Recovery for Rectangular Elements - The Zienkiewicz-Zhu (ZZ) patch recovery is a post-processing technique for the finite element method. It recovers gradient quantities in an element from element patches surrounding the nodes of the element. It has been shown numerically that the ZZ procedure provides superconvergent recovery on regular meshes and recovery with much improved accuracy on general meshes. In this talk, we present a theoretical investigation of this remarkable recovery technique and prove its superconvergence on rectangular grids.
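
As a toy illustration of the flavor of gradient recovery (a hypothetical 1D analogue, not the rectangular-element analysis of the talk): the piecewise-constant gradient of a piecewise-linear finite element function can be recovered at each interior node by fitting over the patch of adjacent elements, which on a uniform mesh gains one order of accuracy over the raw gradient.

import numpy as np

n = 40
x = np.linspace(0.0, 1.0, n + 1)     # uniform mesh on [0, 1]
h = x[1] - x[0]
u = np.sin(np.pi * x)                # nodal values of a smooth "finite element solution"

grad_elem = np.diff(u) / h           # piecewise-constant gradient on each element
# Recovery at an interior node: fit (here: average) over the patch of the two
# adjacent elements; on a uniform mesh this is second-order accurate.
grad_recovered = 0.5 * (grad_elem[:-1] + grad_elem[1:])

exact = np.pi * np.cos(np.pi * x[1:-1])
print("raw element gradient error :", np.max(np.abs(grad_elem[1:] - exact)))
print("recovered gradient error   :", np.max(np.abs(grad_recovered - exact)))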

(October 11) Professor Howard Elman: Performance and Analysis of Preconditioners for the Incompressible Navier-Stokes Equations - Discretization and linearization of the incompressible Navier-Stokes equations lead to linear algebraic systems in which the coefficient matrix has the form of a saddle point problem. In this talk, we describe the development of efficient iterative solution algorithms for this class of problems. In particular, we show how initial approaches used for the Stokes equations, such as the Uzawa algorithm, can be viewed as solvers for the Schur complement system, and we discuss the advantages of explicitly working with the coupled form of saddle point problems and using the Schur complement as a tool for developing preconditioners. We describe some new algorithms derived from this point of view, and we demonstrate their effectiveness for solving both the steady-state and evolutionary Navier-Stokes equations.
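
For reference (standard background, not part of the abstract), up to sign and scaling conventions the systems in question have the block form
\[
  \begin{pmatrix} F & B^T \\ B & 0 \end{pmatrix}
  \begin{pmatrix} u \\ p \end{pmatrix}
  =
  \begin{pmatrix} f \\ g \end{pmatrix},
  \qquad
  S = B F^{-1} B^T ,
\]
where $F$ contains the discretized (convection-)diffusion terms acting on the velocity, $B$ is the discrete divergence operator, and $S$ is the Schur complement. Uzawa-type methods iterate on the reduced system $S\,p = B F^{-1} f - g$, whereas preconditioners for the coupled system typically replace $F$ and $S$ by inexpensive approximations.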

(October 25) Professor Du Qiang: Centroidal Voronoi Tessellations (CVTs) and their applications to Numerical PDEs - A centroidal Voronoi tessellation (CVT) is a Voronoi tessellation of a given set such that the associated generating points are centroids (centers of mass) of the corresponding Voronoi regions. We discuss the applications of CVTs to various scientific and engineering problems, in particular, applications to numerical PDEs such as unstructured grid generation/optimization and meshless computing. We also present methods for computing these tessellations.
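The standard workhorse for computing CVTs is Lloyd's iteration: alternately assign the domain to the nearest generator and move each generator to the centroid of its Voronoi region. Below is a minimal Python sketch using a dense point sample of the unit square to represent the regions; the sample size and iteration count are arbitrary choices, and this is not the speaker's specific algorithm.

import numpy as np

rng = np.random.default_rng(0)
generators = rng.random((10, 2))      # initial generating points in the unit square
samples = rng.random((20000, 2))      # dense point cloud representing the domain

for _ in range(50):
    # Assign every sample point to its nearest generator (discrete Voronoi regions).
    d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    # Move each generator to the centroid (mean) of its region.
    for i in range(len(generators)):
        region = samples[nearest == i]
        if len(region) > 0:
            generators[i] = region.mean(axis=0)

print(generators)   # approximately the generators of a CVT of the unit square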

(November 1) Professor Zhiqiang Cai: The Rate of Corrections and its Application in Scientific Computing - One of the major issues in numerical analysis/scientific computing is the accuracy of the underlying numerical methods. It is common knowledge that the accuracy of all current numerical methods for differentiation, integration, ordinary and partial differential equations, etc., is limited by the highest derivative of the underlying approximated function. Since the solutions of many partial differential equations are not smooth, low-order methods with adaptive mesh refinement seem to be the method of choice for such problems when no prior knowledge of the singular behavior is available, but they are expensive. Obviously, it is desirable to develop higher order accurate methods for non-smooth problems. This seems to be a paradox.

This talk presents an approach for computing higher order accurate approximations for differentiation, integration, ODEs, and PDEs when the underlying approximated functions are not smooth. The key idea of this work is the introduction of the rate of corrections, which is universal, quantifies the accuracy of the numerical method used, and is computationally feasible. The rate of corrections is then used to increase the accuracy of the approximation. This approach can also be applied to problems without a continuum background, such as the sequence of period-doubling bifurcations in chaos.
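
The rate of corrections itself is the subject of the talk and is not reproduced here; the Python sketch below only illustrates the closely related classical idea of estimating an observed convergence rate from three successive refinements and extrapolating (Richardson/Aitken style), applied to the trapezoidal rule for the non-smooth integrand $\sqrt{x}$, where the observed rate is about 1.5 rather than the usual 2.

import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = np.sqrt                                    # non-smooth at x = 0; exact integral is 2/3
A1, A2, A3 = (trapezoid(f, 0.0, 1.0, n) for n in (100, 200, 400))

rate = np.log2((A1 - A2) / (A2 - A3))          # observed convergence rate (about 1.5 here)
extrapolated = A3 + (A3 - A2) / (2.0**rate - 1.0)

print("observed rate       :", rate)
print("error, finest mesh  :", abs(A3 - 2.0/3.0))
print("error, extrapolated :", abs(extrapolated - 2.0/3.0))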

(November 8) Dr. Maxim Olshanskii: Finite element method for the Navier-Stokes equations: A stabilization issue and iterative solvers - A Galerkin finite element method is considered for approximating the steady incompressible Navier-Stokes equations, together with iterative methods for solving the resulting system of algebraic equations. This system couples the velocity and pressure unknowns and thus requires a special solution technique; to this end, we work with the equations in velocity-Bernoulli-pressure variables. A consistent stabilization method is also considered from a new viewpoint. The method suppresses instabilities that occur at low viscosity due to the pressure gradient in the momentum equations, and it also has an impact on solver performance. Theory and numerical results in the talk address both the accuracy of the discrete solutions and the efficiency of the solvers.

(November 15) Professor Roger Temam: Robust Control of Turbulent Flows - Robust (or $H^\infty$) control is aimed at controlling systems that are subject to unpredictable small disturbances. The subject emerged from space studies on the stabilization of light structures in space, and it has developed mainly in the context of linear equations and linear operator theory. In this lecture we will give some remarks concerning robust control for nonlinear equations, especially for the control of turbulent flows governed by the Navier-Stokes equations.

(November 16) Professor Roger Temam: Mathematical Problems in Meteorology and Oceanography - In this lecture we will present some recent and some less recent mathematical results concerning the equations of the atmosphere, the ocean, and the coupled atmosphere-ocean system (the so-called Primitive Equations, first considered by Richardson).

(November 29) Professor Ron DeVore: Adaptive Methods for Solving PDEs - Adaptive methods are often used to numerically resolve PDEs when the solution to the PDE is known to exhibit singularities. Yet it is rare that there is even a convergence theory for an adaptive numerical method, much less an analysis of the decay of the error in terms of the number of computations. In this talk, we shall develop an analysis which allows the a priori determination of whether an adaptive numerical method can perform better than the more standard (and numerically less intensive) linear methods such as standard finite element methods. Since adaptive methods are a form of nonlinear approximation, it is not surprising that this theory has as one of its pillars the fundamental theorems which characterize the approximation rates (in terms of smoothness conditions on the target function) of nonlinear methods. The other pillar of this theory is the regularity of the solution to the PDE. But the new twist is that the regularity is not measured in the usual Sobolev scale but rather in a scale of Besov spaces commensurate with nonlinear methods. We shall give examples of how to apply this theory to hyperbolic and elliptic problems.
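
A representative instance of the kind of characterization alluded to (stated here as background, for $n$-term wavelet approximation in $L_2(\Omega)$ with $\Omega \subset \Re^d$, and up to the usual endpoint caveats) is
\[
  \sigma_n(u) \;=\; \inf_{\#\Lambda \le n}\,
  \Big\| u - \sum_{\lambda \in \Lambda} c_\lambda \psi_\lambda \Big\|_{L_2}
  \;\lesssim\; n^{-s/d}
  \quad\Longleftrightarrow\quad
  u \in B^{s}_{\tau}(L_\tau), \qquad \frac{1}{\tau} = \frac{s}{d} + \frac{1}{2} .
\]
Since $B^{s}_{\tau}(L_\tau)$ with $\tau < 2$ is a much larger class than the Sobolev space governing the corresponding linear rate, solutions with singularities may still be approximable at the optimal nonlinear rate.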

(December 6) Dr. Alan Sussman: Storing and Processing Multi-dimensional Scientific Datasets - Large datasets are playing an increasingly important role in many areas of scientific research. Such datasets can be obtained from various sources, including sensors on scientific instruments and simulations of physical phenomena. The datasets often consist of a very large number of records, and have an underlying multi-dimensional attribute space. Because of such characteristics, traditional relational database techniques are not adequate to efficiently support ad hoc queries into the data. We have therefore been developing algorithms and designing systems to efficiently store and process these datasets in both tightly coupled parallel computer systems and more loosely coupled distributed computing environments.

In this talk, I will discuss the design of two systems, the Active Data Repository (ADR) and DataCutter, for managing large datasets in parallel and distributed environments, respectively. Each of these systems provides both a programming model and a runtime framework for implementing high performance data servers. These data servers provide efficient ad hoc query capabilities into very large (currently up to multiple terabytes) multi-dimensional datasets. ADR is an object-oriented framework that can be customized to provide optimized storage and processing of disk-based datasets on a parallel machine or network of workstations. DataCutter is a component-based programming model and runtime system for building data intensive applications that can execute efficiently in a Grid distributed computing environment. I will present optimization techniques that enable both systems to achieve high performance in a wide range of application areas. I will also present performance results on real applications on various computing platforms to support that claim.