(January 30) Dr. Long Chen:
Multigrid Methods on Adaptive Grids -
In this talk, we shall design and analyze additive and
multiplicative multilevel methods on adapted grids obtained by newest
vertex bisection. The analysis relies on a novel decomposition of newest
vertex bisection that provides a bridge for transferring results on multilevel
methods from uniform grids to adaptive grids. Based on this space
decomposition, we will present a unified approach to the multilevel
methods for H^1, H(curl), and H(div) systems.
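As background for the multilevel methods discussed above, the following is a minimal uniform-grid sketch (in Python, and deliberately not the adaptive-grid method of the talk) of one two-grid correction cycle for the 1D Poisson problem: damped Jacobi smoothing, restriction of the residual, an exact coarse solve, interpolation of the correction, and post-smoothing.

```python
def jacobi(u, f, h, sweeps, omega=2.0/3.0):
    # damped Jacobi smoothing for the centered-difference discretization
    # of -u'' = f with homogeneous Dirichlet boundary values
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - omega)*u[i] + omega*0.5*(u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    r = [0.0]*len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1])/(h*h)
    return r

def coarse_solve(f, h):
    # direct (Thomas) solve of the coarse tridiagonal system for -e'' = f
    n = len(f)
    diag, off = 2.0/(h*h), -1.0/(h*h)
    cp, dp, e = [0.0]*n, [0.0]*n, [0.0]*n
    for i in range(1, n - 1):
        denom = diag - off*cp[i-1]
        cp[i] = off/denom
        dp[i] = (f[i] - off*dp[i-1])/denom
    for i in range(n - 2, 0, -1):
        e[i] = dp[i] - cp[i]*e[i+1]
    return e

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)            # pre-smoothing
    r = residual(u, f, h)
    nc = (len(u) - 1)//2 + 1                 # coarse grid: every other node
    rc = [0.0]*nc
    for j in range(1, nc - 1):               # full-weighting restriction
        rc[j] = 0.25*r[2*j-1] + 0.5*r[2*j] + 0.25*r[2*j+1]
    ec = coarse_solve(rc, 2*h)               # coarse-grid correction
    e = [0.0]*len(u)
    for j in range(nc):                      # linear interpolation back
        e[2*j] = ec[j]
    for i in range(1, len(u) - 1, 2):
        e[i] = 0.5*(e[i-1] + e[i+1])
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, sweeps=3)         # post-smoothing

n = 64
h = 1.0/n
f = [1.0]*(n + 1)
u = [0.0]*(n + 1)
for _ in range(5):
    u = two_grid(u, f, h)
# -u'' = 1, u(0) = u(1) = 0 has exact solution x(1-x)/2, which the
# centered difference scheme reproduces exactly at the nodes
err = max(abs(u[i] - (i*h)*(1 - i*h)/2.0) for i in range(n + 1))
```

The coarse-grid correction is what removes the smooth error components that smoothing alone cannot damp; recursing on the coarse solve instead of solving directly would give a multigrid V-cycle.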
(February 6) Prof. Igor Griva:
Case Studies in Shape and Trajectory Optimization: Catenary Problem -
This talk presents a case study in modern large-scale constrained
optimization to illustrate how recent advances in algorithms and
modeling languages have made it easy to solve difficult problems
using optimization software. We consider the shape of a hanging
chain, which, in equilibrium, minimizes the potential energy of
the chain. We emphasize the importance of the modeling aspect, present
several models of the problem, and demonstrate the differences in iteration
counts and solution times.
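The equilibrium shape in question is the classical catenary y = a cosh(x/a). As a small numerical illustration (not the constrained-optimization-software approach of the talk), the parameter a can be recovered from the support span d and the chain length L by bisection on the arc-length condition 2a sinh(d/(2a)) = L:

```python
import math

def catenary_parameter(d, L, tol=1e-12):
    # solve 2*a*sinh(d/(2*a)) = L for a by bisection; g is decreasing in a,
    # from very large values (taut limit a -> 0) to d - L < 0 (a -> infinity)
    g = lambda a: 2.0*a*math.sinh(d/(2.0*a)) - L
    lo, hi = d/1000.0, 1000.0*d   # brackets chosen to avoid sinh overflow
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# a chain of length 2 hung between supports one unit apart
a = catenary_parameter(1.0, 2.0)
sag = a*(math.cosh(1.0/(2.0*a)) - 1.0)  # depth of the lowest point below the supports
```

This closed-form route exists only for the idealized uniform chain; the point of the talk is that modern modeling languages and solvers handle the general constrained formulation directly.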
(February 13) Prof. Alan Demlow:
Pointwise a posteriori estimates on polygonal domains -
A posteriori error estimates are a fundamental tool in the development of adaptive
finite element methods. In this talk we present a posteriori error estimates for the control of
global and local maximum gradient errors in finite element methods for linear elliptic problems.
In addition to describing a posteriori upper and lower bounds (efficiency and reliability), we
will discuss the effects of ``pollution'' from the singularities that arise at the corners of
polygonal domains upon a posteriori error estimates.
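For readers new to the topic, the sketch below computes the standard 1D energy-norm residual indicators for -u'' = f and a piecewise-linear approximation: an element residual term plus gradient-jump terms. This is the kind of locally computable quantity an adaptive method refines on; it is background only, not the weighted maximum-norm estimators of the talk.

```python
import math

def residual_indicators(x, u, f):
    # eta_K^2 = h_K^2 * ||f||_{L2(K)}^2 (midpoint rule)
    #           + (1/2) * h_K * (squared slope jumps at interior endpoints)
    slopes = [(u[k+1] - u[k])/(x[k+1] - x[k]) for k in range(len(x) - 1)]
    eta = []
    for k in range(len(x) - 1):
        h = x[k+1] - x[k]
        mid = 0.5*(x[k] + x[k+1])
        e2 = h*h*(f(mid)**2 * h)          # element residual (for P1, -u_h'' = 0)
        if k > 0:
            e2 += 0.5*h*(slopes[k] - slopes[k-1])**2
        if k < len(x) - 2:
            e2 += 0.5*h*(slopes[k+1] - slopes[k])**2
        eta.append(math.sqrt(e2))
    return eta

f = lambda t: math.pi**2 * math.sin(math.pi*t)   # so that u = sin(pi x)

def total(n):
    x = [i/n for i in range(n + 1)]
    u = [math.sin(math.pi*xi) for xi in x]       # piecewise-linear interpolant
    return math.sqrt(sum(e*e for e in residual_indicators(x, u, f)))

coarse, fine = total(8), total(16)
```

On this smooth example the total estimator halves under uniform refinement, reflecting its first-order behavior in the energy norm; the "pollution" phenomenon of the talk concerns how corner singularities spoil such local indicators away from the corner.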
(February 20) Dr. Johnny Guzman:
A Superconvergent Mixed Discontinuous Galerkin Method -
We present convergence properties of a mixed discontinuous Galerkin (DG) method for
second order elliptic problems in several space dimensions. This method uses piecewise defined
polynomials of degree k to approximate both the potential and the flux. We show that the method
has optimal order of convergence k+1 for both variables. This is the only known DG method with
this property. The DG potential approximation superconverges with order k+2 to a suitably defined
local projection of the potential. This allows us to postprocess the DG approximation
element-by-element to get a new approximation to the potential that converges with order k+2.
Finally, we compare this method to the popular Raviart-Thomas and Brezzi-Douglas-Marini methods.
(February 27) Dr. Catherine E. Powell:
Robust Preconditioners for Mixed Formulation of Groundwater Flow
Problems -
The simple deterministic Darcy flow problem, in which the
permeability coefficients are assumed to be known explicitly, has
received much attention in the literature over the last two
decades. Linear algebra techniques for tackling it are now mature
and considered to be state of the art. Mixed finite element
methods are recognised to be an invaluable tool for discretisation
and give rise to symmetric and indefinite linear systems of
equations. Users are faced with a plethora of solution
methodologies, and transformations to positive definite systems are
popular. Solving the original indefinite system using
minimal residual schemes is, however, not problematic provided care is taken
in constructing a preconditioner. After reviewing alternative, popular preconditioning schemes for
solving the deterministic Darcy flow problem, we focus on the
parameter-free solution of the full saddle-point problem using
straightforward, non-nested minimum residual iterations. Two
block-diagonal preconditioners are constructed by exploiting the
well-known fact that the underlying variational problem is
well-posed in two distinct pairs of function spaces. The first
relies on the availability of a specialised multigrid
approximation to a weighted H(div) operator. The second is a
simpler black-box method, the key tool for which is an algebraic
multigrid V-cycle applied to a sparse Schur complement matrix. We
end the talk with a brief look at how the second preconditioning
strategy can facilitate the fast solution of the more realistic,
stochastic formulation of the Darcy flow problem using either a
traditional Monte Carlo approach or the more elegant Stochastic
Finite Element method.
(March 6) Prof. C. A. Duarte:
A High-Order Generalized Finite Element Method for Polycrystals and Branched Discontinuities -
Micromechanical analysis of polycrystals using the classical finite element method (FEM) encounters several difficulties.
The FEM requires the generation of meshes that fit the grain boundaries and are sufficiently refined in the
neighborhood of singularities. The elements in a mesh must also have aspect ratios within acceptable bounds. In
addition, uncertainty about, for example, grain morphology (size, size distribution and shape), requires the analysis
of a very large number of models. This is a daunting task even in the case of two-dimensional models. In this talk,
the generalized FEM (GFEM) for polycrystals recently proposed in [2] will be presented. In this approach, the FE
mesh does not need to mimic the grain morphology since grain boundaries and junctions are described by means
of discontinuous enrichment functions. The background GFEM used in the analysis can be refined as if there were
no grain boundaries. This, combined with the proposed high-order GFEM approximations [1], provides a very
flexible and robust method that can deliver accurate solutions. Applications demonstrating the capabilities and potential of the proposed methodology are presented. Under
appropriate loading conditions and temperature, grain boundary sliding is one of the main mechanisms behind
anelastic deformation of polycrystalline aggregates. We present a study on the effect of grain morphology on
anelasticity of polycrystals caused by free grain boundary sliding. We carry out a series of simulations on a wide
range of grain morphologies in several arrangements of grains. The proposed GFEM can also be used for branched and intersecting cracks in two and three dimensions.
Three-dimensional simulations demonstrating these capabilities are presented.
(March 13) Dr. Dmitriy Leykekhman:
Recovered Gradient A Posteriori Error Estimators for Parabolic Problems -
A posteriori error estimates are important for the assessment of the quality
of the computed solution to a partial differential equation (PDE) using the
finite element method (FEM), as well as for adaptive mesh refinement. In
this talk I will concentrate on a family of a posteriori error estimators
that have been applied successfully to the solutions of elliptic problems.
These estimators, so called recovered gradients, are based on averaging
schemes to obtain better estimates of the gradient of the PDE solution than
are given by the gradient of the FEM approximation. Recently, theoretical results have been obtained that explain the success of
these a posteriori error estimators on non-structured meshes for elliptic
problems. After discussing these results, I will outline the extension of the
recovered gradient estimators to parabolic problems and I will discuss
limitations of these estimators.
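The averaging idea behind recovered-gradient estimators can be sketched in 1D as follows: average the two neighboring element slopes at each node to obtain a (typically more accurate) recovered gradient, and use the distance between recovered and raw gradients as a per-element error indicator. This is an illustrative toy, assuming the simplest arithmetic averaging; the recovery operators analyzed in the literature vary.

```python
import math

def recovery_indicators(x, u):
    # piecewise-constant raw gradient of the piecewise-linear u
    slopes = [(u[k+1] - u[k])/(x[k+1] - x[k]) for k in range(len(x) - 1)]
    # recovered nodal gradient: average of adjacent slopes (one-sided at ends)
    g = ([slopes[0]]
         + [0.5*(slopes[k-1] + slopes[k]) for k in range(1, len(x) - 1)]
         + [slopes[-1]])
    eta = []
    for k in range(len(x) - 1):
        h = x[k+1] - x[k]
        a, b = g[k] - slopes[k], g[k+1] - slopes[k]
        # exact L2 norm over the element of the linear difference between
        # the recovered gradient and the constant raw slope
        eta.append(math.sqrt(h*(a*a + a*b + b*b)/3.0))
    return eta

def total(n):
    x = [i/n for i in range(n + 1)]
    u = [math.sin(math.pi*xi) for xi in x]   # piecewise-linear interpolant
    return math.sqrt(sum(e*e for e in recovery_indicators(x, u)))

coarse, fine = total(8), total(16)
```

The reason this works, and the subject of the theoretical results mentioned above, is that the averaged gradient superconverges to the true gradient at the nodes on suitable meshes, so the recovered-minus-raw difference tracks the actual gradient error.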
(March 27) Dr. Pengtao Sun:
New numerical techniques for steady-state two-phase transport model in fuel cell
-
We present some new numerical techniques for finite element methods for a 2D
steady-state two-phase model of the cathode of a polymer electrolyte fuel cell
(PEFC), including an open gas channel and a gas diffusion layer (GDL). This
reduced two-phase PEFC model contains the equations of mass, momentum and
water concentration, which are typically modeled by a modified Navier-Stokes
equation with Darcy's drag as an additional source term in the momentum
equation for flows through the GDL, and a discontinuous and degenerate
convection-diffusion
equation. Based on mixed and standard finite element methods for the
modified Navier-Stokes equation and water concentration equation,
respectively, we employ the Kirchhoff transformation to deal with the
discontinuous and degenerate diffusivity in the water concentration, and a
finite-volume upwind scheme, a streamline-diffusion scheme and a Galerkin
least-squares method to overcome the dominant convection term arising in the
gas channel. In addition, as a complement to the Kirchhoff transformation
for the case of a wet gas channel, we design a nonlinear Dirichlet-Neumann
alternating iteration method to deal with the jumping nonlinearity across the
interface between the GDL and the gas channel. Numerical experiments demonstrate
that our finite element methods, together with these new numerical
techniques, are accurate and efficient, yielding physically reasonable
solutions with fast convergence, in contrast to the oscillatory iterations
that take place in commercial CFD packages (StarCD, Fluent), where a standard
numerical discretization is usually adopted.
(April 3) Prof. Dr. Ronald H.W. Hoppe:
A posteriori error estimation of finite element approximations
of pointwise state constrained distributed control problems -
We provide an a posteriori error analysis of finite element approximations
of pointwise state constrained distributed optimal control problems for
second order elliptic boundary value problems. In particular, we derive
a residual-type a posteriori error estimator and prove its efficiency and
reliability up to oscillations in the data of the problem and a consistency
error term. In contrast to the case of pointwise control constraints, the
analysis is more complicated, since the multipliers associated with the
state constraints live in measure spaces. The analysis essentially makes
use of appropriate regularizations of the multipliers both in the continuous
and in the discrete regime. Numerical examples are given to illustrate the
performance of the error estimator.
(April 10) Dr. Luca Heltai:
Distributional Body Force Densities in Finite Element Approximations of
Continuum Mechanics Problems -
A number of problems in continuum mechanics are known for their singular
behavior and for the difficulties connected to their numerical simulation.
Some of them can be reformulated by introducing a distributional body force
density that induces the desired singularities on the solution. We present this
idea in two different frameworks: fluid structure interaction and crack
formation in linearly elastic materials. The natural numerical framework for
the simulation of these distributional problems is the Finite Element Method. A
number of examples that show the potential of this technique are presented
and compared with existing methodologies.
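One common way to realize a distributional body force density numerically is to regularize the delta function, as in immersed-boundary-type methods; this is a standard related device, not necessarily the exact formulation of the talk. The sketch below spreads a 1D point force onto a grid with a cosine kernel and checks the zeroth-moment condition: the spread density carries exactly the prescribed total force.

```python
import math

def discrete_delta(r):
    # cosine-regularized delta kernel with support |r| <= 2 mesh widths;
    # its lattice sums over integer-shifted points equal 1 for any offset
    return (1.0 + math.cos(math.pi*r/2.0))/4.0 if abs(r) <= 2.0 else 0.0

def spread_point_force(x0, strength, h, n):
    # replace a point force at x0 by the regular grid density
    # f_i = strength * delta_h(x_i - x0), with delta_h(s) = phi(s/h)/h,
    # on the nodes x_i = i*h, i = 0..n
    return [strength*discrete_delta((i*h - x0)/h)/h for i in range(n + 1)]

h = 0.05
f = spread_point_force(0.493, 1.0, h, 20)   # grid on [0, 1]
total = sum(fi*h for fi in f)               # discrete integral of the density
```

The force location 0.493 is deliberately off the grid; the moment condition still holds exactly, which is what makes such regularized densities usable as right-hand sides in a finite element solve.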
(April 17) Prof. Junping Wang:
A Finite Element Method for the Navier-Stokes Equation Using H(div) Elements -
This talk is concerned with numerical methods for stationary
Navier-Stokes equations by using H(div) type finite elements. The main
feature of the method is the achievement of numerical approximations for
the solution of the NS equation that conserve mass analytically. Some
error estimates are derived for the numerical scheme in various Sobolev
norms under certain assumptions.
(April 24) Dr. Radu Balan:
Algebras of Time-Frequency Shift Operators with Applications to
Wireless Communications -
In this talk I present results on various Banach *-algebras of
time-frequency shift operators with absolutely summable
coefficients, and two applications. The L^1 theory contains
non-commutative versions of the Wiener lemma with various norm
estimates. The L^2 theory is built around an explicit formula of
the faithful trace for this algebra. The Zak transform turns out
to be an effective computational tool for practical cases. One application is the Heil-Ramanathan-Topiwala conjecture that
states that finitely many time-frequency shifts of one L^2
function are linearly independent. This turns out to be equivalent to
the absence of eigenspectrum for finite linear combinations of
time-frequency shifts. I will prove a special case of this
conjecture. Next I present two applications in wireless communications. One
application is the channel equalization problem. I will show how
to use the Wiener lemma and the Zak transform to effectively
compute this inverse. A second application concerns canonical
communication channels. The original canonical channel model
allows for designing the Rake receiver. Recently Sayeed and
Aazhang obtained a time-frequency canonical model and designed
its Rake receiver. I will comment on its operator algebra meaning
and ways to generalize it to other canonical models using
time-scale or frequency-scale shifts.
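A finite-dimensional analogue of these operators, on C^N rather than the Banach algebras of the talk, makes the algebra structure concrete: the product of two time-frequency shifts is again a time-frequency shift, up to a phase factor (a twisted composition law). The sketch below implements the shifts and verifies that law numerically.

```python
import cmath

N = 8   # dimension of the finite model space C^N

def tf_shift(x, k, l):
    # time-frequency shift M_l T_k on C^N:
    # (M_l T_k x)[n] = exp(2*pi*i*l*n/N) * x[(n - k) mod N]
    return [cmath.exp(2j*cmath.pi*l*n/N)*x[(n - k) % N] for n in range(N)]

# composition rule: (M_l T_k)(M_l' T_k') = phase * M_{l+l'} T_{k+k'},
# with phase = exp(-2*pi*i*l'*k/N); this twisted multiplication is what
# makes the span of time-frequency shifts an algebra and not just a set
x = [complex(n*n - 3, n) for n in range(N)]   # arbitrary test vector
k, l, kp, lp = 2, 3, 5, 1
lhs = tf_shift(tf_shift(x, kp, lp), k, l)
phase = cmath.exp(-2j*cmath.pi*lp*k/N)
rhs = [phase*v for v in tf_shift(x, (k + kp) % N, (l + lp) % N)]
```

In the talk's setting the coefficients of such operator sums are required to be absolutely summable, which is where the Wiener-lemma results enter.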
(May 3) Prof. Emmanuel J. Candes:
Applications of Compressive Sampling to Error Correction -
``Compressed Sensing'' or ``Compressive Sampling'' (CS) is a new
sampling or sensing theory which goes somewhat against the
conventional wisdom in signal acquisition. This theory allows the
faithful recovery of signals and images from what appear to be highly
incomplete sets of data, i.e. from far fewer data bits than
traditional methods use. It is believed that this phenomenon may have
significant implications. For instance, CS may come to underlie
procedures for sensing and compressing data simultaneously and much
faster. In this talk, we will present the basic tenets of this new
sampling theory and introduce applications in the area of error
correction. Consider a stylized communications problem where one wishes to
transmit a real-valued signal x, a block of n pieces of information,
to a remote receiver. We ask whether it is possible to transmit this
information reliably when a fraction of the transmitted message is
corrupted by arbitrary (malicious) gross errors, and when in addition,
all the entries of the message might be contaminated by smaller errors
(e.g. quantization errors). We show that if one encodes the information
as Ax where A is a suitable m by n matrix, there are a couple of
decoding schemes which allow the recovery of the block of n pieces
of information x with nearly the same accuracy as if no gross errors
occur upon transmission (or equivalently as if one has
an oracle supplying perfect information about the sites and amplitudes
of the gross errors). In the special case where there are only gross
errors, the decoded vector is provably exact. The key point is that
both decoding strategies are very concrete and only involve solving
simple convex optimization programs, either a linear program or a
second-order cone program. Numerical simulations show that the
encoder/decoder pair performs remarkably well.
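The error-correction setup above can be illustrated at a deliberately tiny scale. The talk's decoders solve linear or second-order cone programs; the toy below replaces them with brute-force voting over measurement pairs, which is feasible only at this size, but it shows the same principle: with redundant encoding (m > n) and only a few gross errors, the original block is recovered exactly.

```python
import itertools
from collections import Counter

# encode two real numbers as six measurements y = A x; the redundancy
# (m = 6 > n = 2) is what lets the receiver survive gross corruptions
A = [(1, 1), (1, 2), (1, 3), (2, 1), (3, 1), (1, -1)]

def encode(x):
    return [a0*x[0] + a1*x[1] for a0, a1 in A]

def decode(y):
    # solve the 2x2 system from every pair of measurements; pairs that
    # avoid the corrupted entries all agree on the true x, so the most
    # frequent solution wins (gross-errors-only case, no small noise)
    votes = Counter()
    for i, j in itertools.combinations(range(len(A)), 2):
        det = A[i][0]*A[j][1] - A[i][1]*A[j][0]
        if det == 0:
            continue
        x0 = (y[i]*A[j][1] - y[j]*A[i][1])/det
        x1 = (A[i][0]*y[j] - A[j][0]*y[i])/det
        votes[(round(x0, 9), round(x1, 9))] += 1
    return votes.most_common(1)[0][0]

x = (3.0, -1.25)
y = encode(x)
y[1] += 50.0                 # one gross, arbitrary corruption
recovered = decode(y)
```

Brute force scales exponentially, which is exactly why the convex relaxations of the talk matter: they recover the same answer by solving a single tractable optimization program.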
(May 4) Prof. Emmanuel J. Candes:
Compressive Sampling -
One of the central tenets of signal processing is the Shannon/Nyquist sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth - the length of the shortest interval which contains the support of the spectrum of the signal under study. Very recently, an alternative sampling or sensing theory has emerged which goes against this conventional wisdom. This theory allows the faithful recovery of signals and images from what appear to be highly incomplete sets of data, i.e. from far fewer data bits than traditional methods use. Underlying this methodology is a concrete protocol for sensing and compressing data simultaneously. This talk will present the key mathematical ideas underlying this new sampling or sensing theory, and will survey some of the most important results. We will argue that this is a robust mathematical theory; not only is it possible to recover signals accurately from just an incomplete set of measurements, but it is also possible to do so when the measurements are unreliable and corrupted by noise. We will see that the reconstruction algorithms are very concrete, stable (in the sense that they degrade smoothly as the noise level increases) and practical; in fact, they only involve solving very simple convex optimization programs. An interesting aspect of this theory is that it has bearings on some fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. If time allows, we will try to explain these connections via a few selected examples.
(May 8) Dr. Xiliang Lu:
Error analysis for Navier-Stokes Equations Based on a Sequential Regularization Formulation -
The Navier-Stokes equations are not a well-posed problem from the viewpoint of
constrained dynamical systems. A reformulation to a better-posed problem is needed
before solving it numerically. The sequential regularization method (SRM) is a
reformulation which combines the iterative penalty method with a stabilization
method in the context of constrained dynamical systems and has the benefits of
both methods.
It also can be viewed as an approximated projection reformulation. We will study the
existence and uniqueness for the solution of the SRM and provide convergence of the
solution of the SRM to the solution of the Navier-Stokes equations. We also give the
error estimates for the discretized SRM formulation.
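The iterative-penalty ingredient of such reformulations can be shown on a static quadratic toy (the SRM itself targets the time-dependent constrained Navier-Stokes system, so this is only an illustration of the mechanism): each pass solves a penalized unconstrained problem and feeds the constraint residual back as a multiplier update, so a moderate penalty parameter still drives the constraint violation to zero.

```python
def iterated_penalty(b, c, eps, iters):
    # iterative penalty method for the toy constrained problem
    #   minimize 0.5*||x - b||^2  subject to  c . x = 0,
    # whose exact solution is the orthogonal projection of b onto the
    # constraint plane; the penalized subproblem has a closed form here
    dot = lambda u, v: sum(ui*vi for ui, vi in zip(u, v))
    lam, x = 0.0, list(b)
    for _ in range(iters):
        mu = (lam + dot(c, b)/eps)/(1.0 + dot(c, c)/eps)
        x = [bi - mu*ci for bi, ci in zip(b, c)]
        lam += dot(c, x)/eps   # multiplier update from the constraint residual
    return x

b, c = (2.0, 1.0), (1.0, 1.0)
x = iterated_penalty(b, c, eps=1.0, iters=40)
residual = x[0] + x[1]         # constraint violation c . x
```

With a pure penalty method the constraint error is proportional to eps, so accuracy requires a tiny, ill-conditioned penalty parameter; the iteration above instead contracts the constraint residual geometrically at fixed moderate eps, which is the practical advantage the abstract alludes to.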