(February 3) Matthias K. Gobbert:
Parallel Computing for Long-Time Simulations
of Calcium Waves in a Heart Cell —
The release of calcium ions in a heart cell can lead to self-organized
diffusion waves, which play a role in controlling the beating of the
heart. This process is modeled mathematically by a system of transient
reaction-diffusion equations. The large number of point sources to model
the injection of calcium ions and the requirement for long-time
simulations on high resolution meshes are among the challenges for
convergent and efficient numerical methods for this problem. The
combination of these challenges is tackled by a special purpose code that
has enabled successful long-time simulations that capture the crucial
physical effect of self-organized wave initiation. However, these
simulations also highlight shortcomings of the underlying model. We will
also show performance results from the new distributed-memory cluster hpc
with InfiniBand interconnect in the UMBC High Performance Computing
Facility (HPCF) that demonstrate the excellent scalability of the code
when using all available cores on the compute nodes with two dual-core
processors. This work is joint with Bradford E. Peercy, also at UMBC.

(February 10) Kate Evans:
Fully Implicit Solution Framework for Scalable Earth System Models —
Major algorithmic challenges exist for the solution of the newest
generation of Earth system problems including coupled nonlinear physics,
multiple disparate time scales, and scalability requirements. To access a
range of solver capabilities needed to address these issues, a Fortran
interface package of the Trilinos project is implemented within two
components of the Community Climate System Model, the High-order Method
Modeling Environment (HOMME) option of the Community Atmosphere Model
(CAM), and the Parallel Ocean Program (POP) global ocean model component.
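As generic background on the solver idea used here (a toy sketch of Jacobian-free Newton-Krylov, not the Trilinos/CCSM implementation; all names and the test system below are our own), the key trick is that the Krylov solver never needs the Jacobian matrix, only its action on a vector, which a finite difference of the residual supplies:

```python
import numpy as np

def jf_matvec(F, u, v, eps=1e-7):
    # Jacobian-free action: J(u) v is approximated by (F(u + eps v) - F(u)) / eps
    return (F(u + eps * v) - F(u)) / eps

def gmres(matvec, b, m=20, breakdown_tol=1e-6):
    # minimal full-memory GMRES (Arnoldi + small least-squares problem), no restarts
    n = b.size
    beta = np.linalg.norm(b)
    if beta == 0.0:
        return np.zeros(n)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if H[j + 1, j] < breakdown_tol:   # happy breakdown: Krylov space is exact
            break
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :y.size] @ y

def jfnk(F, u0, tol=1e-10, max_newton=25):
    # Newton outer loop; each step solves J(u) du = -F(u) matrix-free via GMRES
    u = np.array(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        u = u + gmres(lambda v: jf_matvec(F, u, v), -r)
    return u

# toy nonlinear system: x^2 + y^2 = 4, x y = 1
F = lambda u: np.array([u[0]**2 + u[1]**2 - 4.0, u[0] * u[1] - 1.0])
root = jfnk(F, [2.0, 0.5])
print(root, np.linalg.norm(F(root)))      # converged root and its residual
```

In a PDE setting `F` is the fully implicit time-step residual, and the whole difficulty moves into preconditioning the GMRES iteration, which is exactly the issue the abstract raises.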
HOMME uses a cubed sphere grid and spectral element spatial discretization
to achieve maximum scalability. A fully implicit Jacobian-free
Newton-Krylov (JFNK) solution method is used to provide a coherent
nonlinear solution to all dependent variables for several test cases of
the model components. Enhanced accuracy is achieved using a second-order
temporal discretization for a given time step size, and increased
efficiency is attained for a steady-state test case where the solution is
not hindered by time step size constraints. Further, the JFNK solution
framework maintains the same scaling as the explicit method currently used
in the model. The key to efficient integration for the more complete model
using JFNK is a good preconditioner. Several options already under
investigation will be discussed.

(February 17) Benjamin Stamm:
Stabilization Strategies for Discontinuous Galerkin Methods —
Most DG methods require penalization of the solution jumps to improve
continuity of the solution. The added stability leads to optimal convergence
with respect to interpolation in the energy norm. However, although the
stabilization terms are consistent, they may perturb physical properties such
as local mass conservation and affect the conditioning of the linear system.
The aim of this talk is to provide techniques that may be used to increase the
understanding of stabilization mechanisms in DG methods and to provide more
natural schemes. Examples will be given of how stabilization may be reduced so
as not to influence local mass conservation while keeping optimality. Both
elliptic and hyperbolic problems are discussed for different DG formulations
and approximation spaces. All theoretical explanations will be followed by
numerical illustrations.

(February 26) R. B. Kellogg:
A Singularly Perturbed Semilinear Reaction-Diffusion Problem
in a Polygonal Domain —
The semilinear reaction-diffusion equation $-\varepsilon^2\Delta u+b(x,u)=0$
with Dirichlet boundary conditions is considered in a convex
polygonal domain. The diffusion parameter $\varepsilon^2$ is arbitrarily small,
and the ``reduced equation'' $b(x,u_0(x))=0$ may have multiple solutions.
An asymptotic expansion for $u$ is constructed that involves boundary
and corner layer functions. By perturbing this asymptotic expansion,
we obtain certain sub- and super-solutions and thus show the existence
of a solution $u$ that is close to the constructed asymptotic expansion.
The polygonal boundary forces the study of the nonlinear autonomous
elliptic equation $-\Delta z+f(z)=0$ posed in an infinite sector,
and the well-posedness of the corresponding linearized problem.
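A one-dimensional analogue (our toy sketch, not material from the talk) shows the layer structure described above: for $-\varepsilon^2 u'' + u^3 - 1 = 0$ with homogeneous Dirichlet conditions, the solution stays close to the reduced solution $u_0 = 1$ away from boundary layers of width $O(\varepsilon)$:

```python
import numpy as np

# Model problem: -eps^2 u'' + u^3 - 1 = 0 on (0,1), u(0) = u(1) = 0.
# The reduced equation u0^3 - 1 = 0 gives u0 = 1, so the solution is close
# to 1 in the interior, with boundary layers of width O(eps) at both ends.
eps = 1e-2
n = 400
h = 1.0 / n
u = np.ones(n + 1)            # supersolution-like initial guess
u[0] = u[-1] = 0.0            # Dirichlet boundary conditions

for _ in range(50):           # Newton iteration on the interior unknowns
    # residual F_i = -eps^2 (u_{i-1} - 2 u_i + u_{i+1})/h^2 + u_i^3 - 1
    F = (-eps**2 * (u[:-2] - 2*u[1:-1] + u[2:]) / h**2
         + u[1:-1]**3 - 1.0)
    # tridiagonal Jacobian: diagonal 2 eps^2/h^2 + 3 u_i^2, off-diagonals -eps^2/h^2
    J = (np.diag(2*eps**2/h**2 + 3*u[1:-1]**2)
         + np.diag(np.full(n - 2, -eps**2/h**2), 1)
         + np.diag(np.full(n - 2, -eps**2/h**2), -1))
    du = np.linalg.solve(J, -F)
    u[1:-1] += du
    if np.linalg.norm(du, np.inf) < 1e-12:
        break

print(u[n // 2])              # interior value, close to the reduced solution 1
```

The corner-layer functions of the talk are the two-dimensional analogue of these boundary layers, which is what forces the study of the sector problem.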
The arguments that lead to this
well-posedness may be of independent interest. The material in
the talk is joint work with N. Kopteva (Limerick, Ireland).

(March 3) Brian Sutton:
The CS Decomposition: Random Matrix Theory and Computation —
The CS decomposition (CSD) is analogous to the eigenvalue and singular value
decompositions but is specifically designed for partitioned unitary matrices. We will
consider the CSD of a random matrix drawn from the unitary group's Haar measure. The
result not only solves a problem in random matrix theory but also motivates a new
algorithm for computing the CSD. This new algorithm has the benefit of computing both
forms of the CSD: the 2-by-2 CSD for square unitary matrices, as originally described
by G. W. Stewart, and the 2-by-1 CSD for tall and skinny matrices with orthonormal
columns, with application to the generalized singular value decomposition.
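The block structure behind the 2-by-1 CSD can be checked numerically: for a matrix with orthonormal columns partitioned into two blocks, the singular values of the blocks are cosines and sines of a common set of principal angles. A small sketch (our illustration, using block SVDs rather than a dedicated CSD algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random (p+q) x n matrix with orthonormal columns (Haar-distributed via QR).
p, q, n = 5, 6, 4
Q, _ = np.linalg.qr(rng.standard_normal((p + q, n)))
Q1, Q2 = Q[:p], Q[p:]        # the 2-by-1 partition

# In the 2-by-1 CSD, Q1 = U1 C V* and Q2 = U2 S V* with C = diag(cos θ_i),
# S = diag(sin θ_i): the singular values of the two blocks are cosines and
# sines of the same angles, so they pair up with c_i^2 + s_i^2 = 1.
c = np.linalg.svd(Q1, compute_uv=False)  # cosines, descending
s = np.linalg.svd(Q2, compute_uv=False)  # sines, descending
print(np.allclose(np.sort(c**2) + np.sort(s**2)[::-1], 1.0))  # True
```

Computing the shared right factor V* stably, rather than from two independent SVDs, is precisely where a dedicated CSD algorithm is needed.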
(March 10) Karsten Urban:
Numerical Optimization in Engineering Applications —
Several processes in engineering are subject to optimization, e.g. reducing
cost, enhancing effectiveness, increasing speed, etc. Typically, such
optimization problems are quite complex, nonlinear, high dimensional and often
involve stochastic influences. Even though the mathematical theory of
optimization is quite advanced, not so much is known in such ``realistic''
situations. We show some industrial optimization problems and indicate the mathematical
structure of these problems. We show that relatively simple numerical
optimization schemes already give rise to enormous improvements, but we also
indicate that the use of such simple methods is limited. In order to overcome
such limitations, we describe three techniques, namely automatic
differentiation, reduced basis methods and parameter-dependent optimization.

(March 24) Yuesheng Xu:
Fast Methods for Optimal Kernel Selection in Machine Learning —
Optimal kernel methods are popular in machine learning.
However, they suffer from huge computational costs in finding the
optimal kernel because they lead to full matrices of large order. To
tackle this bottleneck problem, we propose fast algorithms for selecting
the optimal kernel in the context of machine learning. Results on
convergence and computational complexity will be presented. Numerical
examples will be shown
to demonstrate the approximation accuracy and computational efficiency
of the proposed methods.
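The cost barrier mentioned in the abstract is easy to see in a generic setting. The sketch below (our illustration, not the fast algorithms of the talk) selects a Gaussian kernel width by validation error for kernel ridge regression, paying a dense linear solve per candidate kernel; it is this per-candidate cost on full kernel matrices that fast selection methods aim to reduce:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = sin(2 pi x) + noise, split into train/validation.
x = rng.uniform(0.0, 1.0, 60)
y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(60)
x_tr, y_tr, x_va, y_va = x[:40], y[:40], x[40:], y[40:]

def gauss_kernel(a, b, width):
    # Gaussian kernel matrix K_ij = exp(-(a_i - b_j)^2 / (2 width^2))
    return np.exp(-(a[:, None] - b[None, :])**2 / (2*width**2))

def fit_predict(width, lam=1e-3):
    # kernel ridge regression: solve (K + lam I) alpha = y on the training set,
    # then evaluate the fitted function at the validation points
    K = gauss_kernel(x_tr, x_tr, width)
    alpha = np.linalg.solve(K + lam*np.eye(len(x_tr)), y_tr)
    return gauss_kernel(x_va, x_tr, width) @ alpha

# naive kernel "selection": try each candidate width and keep the one with the
# smallest validation error; each trial costs a dense O(n^3) factorization
widths = [0.01, 0.05, 0.1, 0.5, 1.0]
errs = [np.mean((fit_predict(w) - y_va)**2) for w in widths]
best = widths[int(np.argmin(errs))]
```

With n training points, every candidate kernel costs a dense factorization, which is the bottleneck the abstract refers to.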
(March 31) Radu Balan:
A Nonlinear Reconstruction Algorithm from Absolute Value
of Frame Coefficients for Low Redundancy Frames —
In this talk we present a signal reconstruction algorithm from absolute
values of frame coefficients that requires a relatively low redundancy.
The basic idea is to use a nonlinear embedding of the input signal Hilbert
space into a higher dimensional Hilbert space of sesquilinear functionals so
that absolute values of frame coefficients are associated to relevant
inner products in that space. In that space the reconstruction becomes
linear and can be performed in a polynomial number of steps.

(April 14) Tobias von Petersdorff:
Exponential convergence of hp quadrature for integral operators —
Integral equations occur in many applications. For the numerical approximation with a Galerkin method the elements of the stiffness
matrix have to be computed. These are integrals of the type
$$\int_{S_1} \int_{S_2} g(x,y) \, dy \, dx$$
where $S_1$, $S_2$ are simplices in $\mathbb{R}^d$, and $g(x,y)$ is a function
which is singular for $x=y$, but analytic otherwise. The computation of these
integrals is the main difficulty in the numerical solution of an integral
equation: the singular integrals have to be computed with high accuracy in
order to maintain the convergence rate of the Galerkin method, but using too
many function evaluations increases the complexity of the resulting algorithm.
Most methods in the literature rely on a very specific form of the
kernel function. We present an algorithm which
- works for most kernel functions commonly occurring in integral
equations; even in the case where the kernel is not analytic
but only in a Gevrey class away from the diagonal
- works for simplices or parallelotopes in arbitrary dimensions,
and all possible cases where simplices share vertices, edges,
faces, etc.; also for isotropic mesh refinement
- uses only function evaluations of the integrand $g(x,y)$
- the error is exponentially convergent; hence only $|\log h|^a$
function evaluations are required for each integral to achieve a convergence
rate $h^b$ for the Galerkin method
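A one-dimensional toy version of such singularity-adapted quadrature (our sketch, not the algorithm of the talk) already shows the characteristic behavior: composite Gauss rules on a mesh refined geometrically toward the singularity drive the error down geometrically in the number of levels:

```python
import numpy as np

def gauss_on(f, a, b, m):
    # m-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]
    t, w = np.polynomial.legendre.leggauss(m)
    return 0.5*(b - a) * np.sum(w * f(0.5*(b - a)*t + 0.5*(a + b)))

def geometric_quad(f, levels, m=8, sigma=0.5):
    # composite Gauss on the geometric mesh 1, sigma, sigma^2, ..., sigma^levels, 0
    pts = [sigma**k for k in range(levels + 1)] + [0.0]
    return sum(gauss_on(f, pts[k + 1], pts[k], m) for k in range(levels + 1))

# f(x) = 1/sqrt(x) is singular at x = 0; the exact integral over (0,1) is 2.
# Gauss nodes are interior, so the rule never evaluates f at the singularity.
f = lambda x: 1.0/np.sqrt(x)
for L in (2, 6, 10, 14):
    err = abs(geometric_quad(f, L) - 2.0)
    print(L, err)   # error shrinks geometrically as levels are added
```

The talk's setting is the much harder one of double integrals over pairs of simplices sharing vertices, edges, or faces, but the geometric-refinement mechanism is the same.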
This is joint work with C. Schwab (ETH Zurich) and A. Chernov (Univ. Bonn).

(April 21) Malena Espanol:
A Multilevel, Modified Regularized Total Least Norm Approach to Signal
Deblurring —
In this talk, we present a multilevel method for discrete ill-posed
problems formulated as total least norm problems. We will focus on the
signal deblurring problem where both the blurring operator and the
blurred signal contain noise. Regularized total least norm (R-TLN)
approaches which have been developed to solve this problem require the
minimization of a functional with respect to the unknown perturbation
in the blurring operator and the desired image. Much of the work to
date in solving R-TLN has required the perturbation operator to have
special structure (e.g. sparsity structure or Toeplitz type structure)
in order to make the minimization problem more computationally
tractable. Our goal is to gain additional efficiency by means of a
multilevel approach. Therefore, we present a multilevel method that
uses the Haar wavelets as restriction and prolongation operators. We
show that the choice of the Haar wavelet operator has the advantage of
preserving matrix structure, such as Toeplitz, among grids, and we
discuss how this can be incorporated into intermediate R-TLN problems
on each level. Finally, we present results that indicate the promise
of this approach on deblurring signals with edges. This is joint work
with Misha Kilmer and Dianne O'Leary.

(April 28) Rosemary Renaut:
Statistical Properties of the Regularized Least Squares Functional
and a Hybrid LSQR Newton method for Finding the Regularization
Parameter: Application in Image Deblurring and Signal Restoration —
Image deblurring or signal restoration can be formulated as a data fitting least
squares problem, but the problem is severely ill-posed and regularization is
needed, hence introducing the need to find a regularization parameter. I will
review the background on finding the regularization parameter dependent on the
properties of the regularized least squares functional
$\|Ax-b\|^2_{W_b} + \|D(x-x_0)\|_{W_x}^2$ for the solution of discrete
ill-posed systems of equations. It was recently
shown to follow a chi-squared distribution when the a priori information x0 on the
solution is assumed to represent the mean of the solution x. But of course for
image deblurring, we don't wish to assume knowledge of a prior image to obtain the
image. On the other hand, it is possible to obtain statistical properties of the
given image, hence given the mean value of the right hand side, b, the functional
is again a chi-squared distribution, but one that is non-central. These results
can be used to design a Newton method, using a hybrid LSQR approach, for the
determination of the optimal regularization parameter $\lambda$ when the weight
matrix is $W_x=\lambda^2 I$. Numerical results using test problems demonstrate the efficiency
of the method, particularly for the hybrid LSQR implementation. Results are
compared to another statistical method, the unbiased predictive risk (UPRE)
algorithm. The method has potential for efficient image deblurring, and current
work is aimed at extending the method for determining local regularization
parameters. Results are illustrated for image deblurring.

(April 30) Leslie Greengard:
A New Formalism for Electromagnetic Scattering in Complex Geometry —
We will describe some recent, elementary results in the theory of electromagnetic scattering. There
are two classical approaches that we will review - one based on the vector and scalar potential and
applicable in arbitrary geometry, and one based on two scalar potentials (due to Lorenz, Debye and
Mie), valid only in the exterior of a sphere. In trying to extend the Lorenz-Debye-Mie approach to
arbitrary geometry, we have encountered some new mathematical questions involving differential
geometry, partial differential equations and numerical analysis. This is joint work with Charlie
Epstein.