MATHEMATICS AND COMPUTATION IN IMAGING
SCIENCE AND
INFORMATION PROCESSING
(July – December 2003)
~ Abstracts ~
Huang's Empirical Mode
Decomposition and splines
Sherman D. Riemenschneider, Dept. of Mathematics, West
Virginia University, USA
Cubic splines play a major role in the Empirical Mode
Decomposition introduced by Norden Huang for nonstationary
and nonlinear signals as a means to define an instantaneous
frequency via the Hilbert Spectrum. In this talk we give some
elementary results about the Hilbert transforms of splines,
and other roles that splines may play in obtaining an EMD.
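As a small illustration of the object the Hilbert Spectrum is built on, the sketch below (our own numpy example, not part of the talk) computes an instantaneous frequency from the analytic signal of a pure tone:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: keep DC, double positive frequencies."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t)          # a pure 5 Hz tone
f_inst = instantaneous_frequency(x, fs)  # should hover around 5 Hz
```

In the EMD itself this computation is applied to each intrinsic mode function, which is where the cubic-spline envelopes enter.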
Generating a good triangular mesh
Gilbert Strang, Dept. of Mathematics, Massachusetts
Institute of Technology, USA
An essential first step in many problems of numerical
analysis and computer graphics is to cover a region with a
reasonably regular mesh.
We describe every line of a short MATLAB code that begins
with a "distance function" to describe the region:
d(x,y) is the distance to the boundary (with d < 0 inside
the region). The code must decide the location of nodes and
the choice of edges. At each iteration the mesh becomes a
truss (with forces in the edges that move the nodes). Then the
Delaunay algorithm resets the topology of the triangles. The
input includes a size function h(x,y) giving the desired
mesh lengths over the region.
The code is adaptable and public. It extends directly to n
dimensions. The 2D meshes are remarkably regular. The theory
is not complete.
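The iteration described above can be sketched in a few dozen lines; the following is a loose Python port of the idea (using scipy's Delaunay; the function names and the constants, such as the 1.2 scaling and the 0.2 step, are our assumptions, not the published MATLAB code):

```python
import numpy as np
from scipy.spatial import Delaunay

def fd_circle(p):
    """Signed distance to the unit circle (negative inside)."""
    return np.sqrt((p ** 2).sum(axis=1)) - 1.0

def distmesh2d(fd, h0=0.25, bbox=(-1.1, -1.1, 1.1, 1.1), n_iter=60):
    # 1. start from an equilateral-ish grid of candidate nodes
    x, y = np.meshgrid(np.arange(bbox[0], bbox[2] + h0, h0),
                       np.arange(bbox[1], bbox[3] + h0, h0 * np.sqrt(3) / 2))
    x[1::2] += h0 / 2
    p = np.c_[x.ravel(), y.ravel()]
    p = p[fd(p) < h0 / 2]                  # keep nodes in or near the region
    for _ in range(n_iter):
        # 2. Delaunay resets the topology; drop triangles outside the region
        tri = Delaunay(p).simplices
        tri = tri[fd(p[tri].mean(axis=1)) < 0]
        bars = np.unique(np.sort(np.vstack([tri[:, [0, 1]],
                                            tri[:, [1, 2]],
                                            tri[:, [2, 0]]]), axis=1), axis=0)
        # 3. treat the mesh as a truss with repulsive-only bar forces
        vec = p[bars[:, 0]] - p[bars[:, 1]]
        L = np.sqrt((vec ** 2).sum(axis=1))
        L0 = 1.2 * np.sqrt((L ** 2).sum() / len(L))   # uniform target length
        Fvec = (np.maximum(L0 - L, 0.0) / L)[:, None] * vec
        move = np.zeros_like(p)
        np.add.at(move, bars[:, 0], Fvec)
        np.add.at(move, bars[:, 1], -Fvec)
        p = p + 0.2 * move
        # 4. project nodes that escaped back onto the boundary,
        #    using a numerical gradient of the distance function
        d = fd(p)
        out = d > 0
        if out.any():
            eps = 1e-6
            dgx = (fd(p[out] + [eps, 0.0]) - d[out]) / eps
            dgy = (fd(p[out] + [0.0, eps]) - d[out]) / eps
            p[out] -= d[out, None] * np.c_[dgx, dgy]
    return p, tri

p, tri = distmesh2d(fd_circle)
# every node ends up inside the unit disc (or on its boundary)
```

Only the distance function changes from region to region, which is what makes the approach so adaptable.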
Orthogonal frames of translates
Eric S. Weber, Dept. of Mathematics, University of Wyoming,
USA
A frame sequence (or Bessel sequence) defines an operator,
called an analysis operator, which maps the Hilbert space into
a space of coefficient sequences. The composition of the
adjoint of the operator with an analysis operator from another
frame sequence is a bounded operator on the Hilbert space
which is a sum of rank-one tensors. This operator, sometimes
called a ``Mixed Dual Gramian'', is an important operator in
frame theory. The duality condition of two frames corresponds
to this operator being the identity.
Two Bessel sequences are said to be orthogonal if this
operator is the 0 operator. The motivation for orthogonal
Bessel sequences comes from the following problems:
1) Given a frame, can we find all of the dual frames?
2) How can frames be used in orthogonal division multiple
access communications?
3) How can perfect reconstruction of vectors in a subspace
be accomplished when the vectors are not in the subspace?
We shall describe these problems, and also describe some
other applications which arise from the problems, such as
cryptography and watermarking. We shall then give
characterizations of when two Bessel sequences are orthogonal
when the Bessel sequences have the form of translates of a
finite number of functions in L^{2}(R^{d}). The
characterizations are applied to Bessel sequences which have
an affine structure, a quasi-affine structure, a Gabor
structure, and also an irregular structure. Moreover, we
characterize perfect reconstruction (duality) of subspace
frames for translation invariant subspaces of L^{2}(R^{d}).
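In finite dimensions these notions can be verified directly; a small numpy sketch (our example, with the Mercedes-Benz frame in R^2 and a trivially orthogonal Bessel pair):

```python
import numpy as np

# Columns of F are the frame vectors (Mercedes-Benz frame in R^2).
F = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

S = F @ F.T                    # frame operator (synthesis o analysis)
G = np.linalg.inv(S) @ F       # canonical dual frame vectors (columns)

# Duality: the mixed operator  sum_i g_i f_i^T = G F^T  is the identity.
assert np.allclose(G @ F.T, np.eye(2))

# Orthogonality: two Bessel sequences whose mixed operator is the 0 operator
# (a degenerate but valid example: vectors supported on e1 vs. on e2).
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
assert np.allclose(B @ A.T, np.zeros((2, 2)))
```

The talk's characterizations concern the infinite-dimensional analogue of these identities for translates in L^2(R^d).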
Uncertainty principles and
wavelets
Tim N. T. Goodman, Dept of Mathematics, The University of
Dundee
Heisenberg’s uncertainty principle gives a lower bound on
the ‘uncertainty product’ of a function, a measure of its
spread in both time and frequency. It is seen that for certain
sequences of functions satisfying self-similarity equations,
the uncertainty products converge to the minimum. We then
introduce the related wavelets: functions which allow
decomposition simultaneously in time and frequency with
orthogonality between different frequencies. For such bandpass
filters the uncertainty principle is modified and we examine
the convergence to the minimum of the appropriate uncertainty
products of the wavelets. We also discuss uncertainty
principles for periodic functions. In these cases the minimum
uncertainty is not attained and we consider sequences of
trigonometric polynomials whose uncertainty products converge
to the minimum. These results are extended to functions on
general spheres, and we also suggest an uncertainty principle
on the circle for which minimum uncertainty is attained.
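The minimum in Heisenberg's inequality (with the convention Δt·Δω ≥ 1/2, equality for Gaussians) can be checked numerically; a minimal numpy sketch (our illustration, not part of the talk):

```python
import numpy as np

# Discretized uncertainty product for a Gaussian window.
N, T = 4096, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
dt = t[1] - t[0]

psi = np.exp(-t ** 2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dt)   # L2-normalize

var_t = np.sum(t ** 2 * np.abs(psi) ** 2) * dt  # time spread (mean is 0)

Psi = np.fft.fft(psi)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
w = np.abs(Psi) ** 2
var_w = np.sum(omega ** 2 * w) / np.sum(w)      # frequency spread

product = np.sqrt(var_t * var_w)
# for the Gaussian the product is essentially the minimum 1/2
```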
CT and MR imaging
Chye Hwang Yan, DSO National Laboratories and Borys Shuter,
Department of Radiology, National University Hospital
The purpose of this tutorial is to present a technical
overview on computed tomography and magnetic resonance
imaging. It will cover the underlying physics and also the
mathematical techniques used for image reconstruction. A
portion of the tutorial will address the limitations of these
imaging modalities, such as artifacts and noise. The tutorial
will end with a survey of current state-of-the-art CT and MR
machines.
Image processing and soft computing
techniques for medical applications
Kap Luk Chan, School of Electrical and Electronic
Engineering, Nanyang Technological University
Early detection and treatment of colorectal cancer helps
reduce the number of deaths from this disease. It is thus
meaningful to develop image processing and analysis techniques
and apply them to endoscopic (colonoscopic) images acquired
through colonoscopy for this purpose. In this presentation, we
report several techniques developed for the detection of
abnormalities in the colon from colonoscopic images, which may
assist subsequent analysis for detection of colorectal cancer
by doctors and health care providers. We have developed
edge-based techniques for abnormality detection through edge
curvature analysis, and region-based techniques through
region segmentation using neural networks and fuzzy logic.
Both color and textural image properties are explored. In a
more recent attempt, a new concept-learning approach is taken
to overcome the ambiguity in the problem definition. Through
learning from image data labeled by medical experts,
diagnosis models can be extracted automatically. We evaluate
our methods on a set of colonoscopic images. The experimental
results show the feasibility of our methods and suggest
promising applicability for clinical implementation.
Current clinical applications and
limitations of CT/MR imaging
Poh Sun Goh, Department of Radiology, National University
Hospital
This talk covers the advantages and limitations of CT/MR
Imaging.
Advanced signal processing
algorithms for fMRI
Sadasivan Puthusserypady, Department of Electrical and
Computer Engineering, National University of Singapore
fMRI is a new technique for localising brain activity,
while independent component analysis (ICA) is a relatively
new technique for blind source separation (BSS). The
principle of brain modularity states that different regions
of the brain perform different functions independently; thus
spatial ICA can be applied to fMRI. Brain signals have proven
to be nonlinear, hence applying nonlinear ICA to fMRI signals
should yield better results. However, nonlinear ICA has the
problem of non-unique solutions, so the post-nonlinear (PNL)
ICA model is used. Furthermore, the PNL-ICA model closely
resembles the balloon model. The balloon model is a
biomechanical model of the haemodynamics of the blood system,
which includes the transient states.
Automated retinal image analysis
Sanjaya P.M.D. Perera, School of Computing, National
University of Singapore
Diabetic retinopathy is a major cause of blindness in the
world. Regular screening and timely intervention can halt or
reverse the progression of this disease. Digital retinal
imaging technologies have become an integral part of eye
screening programs worldwide due to their greater accuracy and
repeatability in staging diabetic retinopathy. Existing
automated techniques to detect diabetic retinopathy lesions
and retinal structures are neither adaptable nor sufficiently
sensitive and specific for real-life screening applications.
This talk presents accurate and robust algorithms for the
automated detection of hard exudates, retinal vessels, the
optic disc and the macular region. The hard-exudate detection
algorithm, which incorporates domain knowledge, achieved 100%
sensitivity and 72% specificity on 543 consecutive digital
retinal images. The large-retinal-vessel and optic-disc
detection algorithms achieved above 90% accuracy, and the
macular-region detection can further classify hard exudates
into macular exudates with 92% sensitivity and 92%
specificity.
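For reference, the sensitivity and specificity figures above are the standard confusion-matrix ratios; a minimal sketch with hypothetical labels (not the talk's data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical screening outcome: 4 diseased images, 6 healthy
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # all 4 caught, 1 false alarm
sens, spec = sensitivity_specificity(truth, preds)
# sens = 1.0, spec = 5/6
```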
Nonrigid registration in MRI
mammography
Ek Tsoon Tan, Department of Electrical and Computer
Engineering, National University of Singapore
Magnetic resonance imaging (MRI) has been suggested as an
alternative to x-ray mammography because of its 3D
tomography, better contrast between tissue types, and because
it is radiation-free. The use of a contrast agent for
highlighting regions with malignant lesions results in an
intensity increase between the pre- and post-contrast scans.
Image registration is required to capture involuntary patient
movement and the non-uniform intensity increases, so as to
correctly identify malignant lesions. Recent work on this
volume registration task employed rigid and nonrigid models
with entropy-based cost functions. The long computational
time is the main drawback of entropy-based algorithms.
The talk presents a new adaptive Normalized Mutual
Information (NMI) optimization scheme that significantly
reduces the computational requirements. The new scheme makes
use of the current histogram to (1) predict the change
introduced by the contrast agent, and (2) adapt the standard
deviation of the Parzen window for each intensity level.
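The NMI cost at the heart of such schemes can be computed from a joint histogram; a minimal numpy sketch (our illustration; the adaptive Parzen windowing of the talk is not reproduced here):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information (H(A)+H(B))/H(A,B) from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = hist / hist.sum()
    pa = pab.sum(axis=1)
    pb = pab.sum(axis=0)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (H(pa) + H(pb)) / H(pab.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# an image is maximally informative about itself (NMI = 2), far less about noise
assert nmi(img, img) > nmi(img, noise)
```

Registration maximizes this quantity over the transformation parameters.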
Cramér-Rao Lower Bound for parameter
estimation of multidimensional NMR data
Zhi-Ping Lin, School of Electrical and Electronic
Engineering, Nanyang Technological University
Nuclear magnetic resonance (NMR) is one of the two major
experimental techniques for determining proteins' 3D
structures. One of the key steps in multidimensional NMR data
processing is parameter estimation. This talk presents a new
method for the calculation of the Fisher information matrix,
the inverse of which gives the Cramér-Rao lower bound. Our
method is based on a novel concept of derivative system and
solutions to certain Lyapunov equations. The techniques are
illustrated by a simulation example from NMR spectroscopy.
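A minimal sanity check of the Cramér-Rao machinery on the simplest possible model (our example, not the derivative-system method of the talk):

```python
import numpy as np

# N i.i.d. Gaussian samples with unknown mean mu and known sigma:
# Fisher information I(mu) = N / sigma^2, so CRLB = sigma^2 / N,
# and the sample mean attains it.
rng = np.random.default_rng(1)
N, sigma, trials = 50, 1.0, 4000
crlb = sigma ** 2 / N                    # = 0.02

means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
empirical_var = means.var()
# empirical variance of the estimator should be close to the CRLB
```

For multidimensional NMR signal models the Fisher matrix has no such closed form, which is where the Lyapunov-equation technique of the talk comes in.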
On fast algorithms to compute
canonical windows for Gabor frames
A.J.E.M. Janssen, Philips Research Laboratories, The
Netherlands
We present iterative algorithms, some of which were already
known to Feichtinger and Strohmer, for the computation of the
canonical tight (g^{t}) and dual (g^{d}) windows associated
with a Gabor frame (g, a, b). To this end we develop a
mechanism for proposing iterative algorithms [involving the
current frame operator and (for g^{t}) its inverse] with a
desired convergence rate. The algorithms so obtained with
cubic convergence and using frame operators only (no
inverses) are conditionally convergent, requiring a minimal
value of the initial frame bound ratio A/B. These minimal
values can be estimated by using a sharp form of
Kantorovich's inequality.
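For finite frames the canonical tight window is simply S^{-1/2}g; the sketch below computes that object directly by eigendecomposition (our illustration of what the iterations converge to, not the inverse-free algorithms of the talk):

```python
import numpy as np

def canonical_tight(F):
    """Columns of F are frame vectors; return S^(-1/2) F, the canonical tight frame."""
    S = F @ F.conj().T                           # frame operator
    w, V = np.linalg.eigh(S)
    S_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    return S_inv_sqrt @ F

rng = np.random.default_rng(2)
F = rng.standard_normal((4, 9))                  # a generic frame of 9 vectors in R^4
Ft = canonical_tight(F)
# the new frame operator is the identity: the frame is (Parseval) tight
assert np.allclose(Ft @ Ft.T, np.eye(4))
```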
Tight frames and their symmetries
Shayne Waldron, Dept. of Mathematics, University of
Auckland
Frame representations are useful because they are
technically similar to orthogonal expansions (they simply have
more terms) and can be constructed to have desirable
properties that may be impossible for an orthogonal basis,
e.g., in the case of wavelets certain smoothness and small
support properties.
Here we show that frames are of interest where the
desirable properties are symmetries of the underlying
space (which an orthogonal basis cannot express). One
important example is orthogonal polynomials of several
variables for a weight which has some symmetries. It turns out
that there are a finite number of tight frames of n
vectors in d dimensions generated by an abelian
subgroup of its symmetries. We discuss how this list of
so-called harmonic frames was calculated using the
symbolic algebra package MAGMA, and how to use it.
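The simplest harmonic frames can be written down directly from the DFT matrix; a small numpy sketch (our example, with no need for MAGMA in this baby case):

```python
import numpy as np

def harmonic_frame(n, d):
    """d rows of the n-point DFT matrix give a tight frame of n vectors in C^d."""
    k = np.arange(n)
    rows = np.arange(d)
    F = np.exp(2j * np.pi * np.outer(rows, k) / n) / np.sqrt(d)
    return F                      # columns are the (unit-norm) frame vectors

F = harmonic_frame(5, 2)
# tightness: F F* = (n/d) I; the frame is the orbit of its first vector
# under a cyclic (abelian) group of diagonal unitaries
assert np.allclose(F @ F.conj().T, (5 / 2) * np.eye(2))
```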
Recent progress in iteratively
regularized image reconstruction
Martin Hanke-Bourgeois, Johannes Gutenberg-Universität,
Germany
We survey various options for the regularized
reconstruction of blurred and noisy images, given complete
information about the blurring operator. The topics we will
focus on include preconditioning, in particular multilevel
preconditioning, and the possibility of regularization by
iteration or by hybrid combinations of the Lanczos method with
other regularizing schemes. We shall also briefly comment on
the delicate issue of how to incorporate nonnegativity
constraints.
A Multigrid technique for restoring
images with Dirichlet boundary conditions
Stefano Serra Capizzano, Dipartimento di Chimica, Fisica e
Matematica, Università dell'Insubria, Sede di Como
We propose a method of multigrid type which has an optimal
computational cost and whose convergence speed is robust both
with respect to the size of the problem and to the
regularization parameter.
Boundary conditions and fast
deblurring models
Stefano Serra Capizzano, Dipartimento di Chimica, Fisica e
Matematica, Università dell'Insubria, Sede di Como
Antireflecting boundary conditions (ARBC) for blurring
models were recently introduced (see Serra-Capizzano, SISC,
to appear): the idea seems promising from both the
computational and the approximation viewpoint. The key point
is that, under certain symmetry conditions, the ARBC matrices
can be essentially simultaneously diagonalized by the (fast)
sine transform DST I and, moreover, C^{1} continuity at the
border is guaranteed in the 1D case.
Here we give more details for the 2D case and we perform
extensive numerical simulations which illustrate that the
ARBC can be superior to Dirichlet, periodic and reflective
BCs in certain applications.
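The computational point, that the right boundary condition makes the blur matrix diagonalizable by a fast trigonometric transform, is easiest to see in the periodic case; a 1-D numpy sketch (our illustration, not the ARBC/DST-I construction itself):

```python
import numpy as np

# With periodic boundary conditions the blur matrix is circulant, hence
# diagonalized by the FFT -- the analogue of the DST-I diagonalization
# enjoyed by the symmetric ARBC matrices.
n = 256
x = np.zeros(n)
x[60:90] = 1.0                                   # a simple 1-D "image"

k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2)  # Gaussian PSF, sigma = 1
k = np.roll(k / k.sum(), -(n // 2))              # center it at index 0
K = np.fft.fft(k)                                # eigenvalues of the blur matrix

y = np.real(np.fft.ifft(K * np.fft.fft(x)))      # blurring is one FFT sandwich

# noise-free restoration: divide by the eigenvalues (with noisy data one
# adds Tikhonov regularization, X = conj(K) Y / (|K|^2 + lambda))
x_rec = np.real(np.fft.ifft(np.fft.fft(y) / K))
```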
Applications of 3D digital
technologies in cultural heritage
Hongbin Zha, Center for Information Science, National
Laboratory on Machine Perception, Peking University, China
In the presentation, I will report new progress in our
research on applications of 3D digital technologies in
heritage digitization and digital museum. The main topics
include: 3D modeling of cultural artifacts, relics and sites;
digital geometry processing for large heritage models;
photorealistic visualization of cultural scenes.
The fast and parallel processing
for bit-plane coding in JPEG2000
Chao Xu, Center for Information Science, National
Laboratory on Machine Perception, Peking University, China
In the JPEG2000 image compression standard, fractional
bit-plane coding is adopted for context formation in
arithmetic coding. Fractional bit-plane coding provides
greater quality and resolution scalability, but makes fast
implementation more computationally difficult. In this paper,
we present a parallel processing scheme for bit-plane coding
that greatly reduces computation time. A preprocessor is
proposed to make each bit-plane coding pass independent for
parallel processing. The encoder is divided into multiple
modules for analysis, and a method of assigning modules
according to statistics of module usage is proposed. The
parallel processing is suitable for hardware implementation:
with roughly a four-fold increase in circuit resources, it
achieves an eight-fold speedup.
Omnidirectional vision based motion
detection for multiple human bodies
Hong Liu, Center for Information Science, National
Laboratory on Machine Perception, Peking University, China
Computer vision systems for moving targets tracking are
widely used in many applications such as smart surveillance,
virtual reality, advanced user interfaces, motion analysis and
model-based image coding. One of the most important issues in
visual surveillance is to expand the fields of view of visual
sensors. Since traditional cameras suffer from limited fields
of view, loss of global visual information, and, for active
cameras, difficulty of control, we have proposed a new
real-time visual system based on omnidirectional vision for
detecting and tracking multiple human bodies in indoor
environments. First, an omnidirectional camera is used to
obtain 360º views of the scene to enlarge the monitored
area. The omnidirectional images are then mapped to
cylindrical panoramic images. Second, an adaptive subtraction
method is utilized to improve the robustness of segmenting
moving targets in dynamic environments. Moreover, shape and
color information are used for tracking and identifying
multiple people, whether they are in groups or separated from
each other. Experiments show that the system can track
multiple moving human bodies over large fields of view in a
dynamic environment in real time. Finally, potential
applications and research plans for omnidirectional vision
will be discussed.
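The adaptive subtraction step can be sketched with a running-average background model (our guess at a minimal variant; the system's actual method is more elaborate):

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=0.2):
    """Adaptive background subtraction with a running-average background."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        masks.append(np.abs(f - bg) > thresh)   # foreground where frame departs
        bg = (1 - alpha) * bg + alpha * f       # slowly adapt to the scene
    return masks

# synthetic sequence: static 0.5 background, a bright 5x5 patch moving right
frames = []
for i in range(10):
    f = np.full((32, 32), 0.5)
    f[10:15, 2 + 2 * i: 7 + 2 * i] = 1.0
    frames.append(f)
masks = detect_moving(frames)
# the moving patch is flagged in every frame
```

The adaptation constant alpha trades ghosting against sensitivity to genuine scene changes.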
M-band wavelet construction and
compound image compression
Tong Lin, Center for Information Science, National
Laboratory on Machine Perception, Peking University, China
Two parts of work are reported in this talk. One is the
algebraic construction of M-band orthogonal wavelets. A
system of constraint equations is obtained for M-band
orthonormal filters, and then an SVD-based solution is
developed to produce innumerable wavelet bases of a given
length. The property of 2 vanishing moments is also
integrated into our wavelet construction process, which
provides another way to compute 2-regular M-band filter
banks. The other part is compound image compression for
real-time computer-generated screen transmission. A
block-based lossless coding algorithm is designed for textual
and graphical contents, which contains several techniques
such as shape-based coding, palette-based coding, run-length
coding, and palette reuse. The coded bitstream is fed into an
LZW coder for further compression. We then integrate the
lossless algorithm with lossy JPEG into a hybrid image
compression scheme. The key is to extract text and graphics
as foreground from photographic pictures within a
low-complexity constraint. The shape-based coding in our
lossless algorithm elegantly provides the intelligence to
extract computer-generated text/graphics without extra
effort.
Medical image series segmentation
based on modified active contour model
Jie Tian, Institute of Automation, Chinese Academy of
Sciences, National Laboratory of Pattern Recognition, China
In this paper, we propose an algorithm for the
semi-automatic segmentation of medical image series based on
the combination of the live wire algorithm and the active
contour model. We modify the traditional live wire algorithm
by incorporating the watershed transform, and obtain accurate
segmentation of one or more slices in a medical image series
by the live wire algorithm. Based on the segmentation of
previous slices, the computer automatically segments the
nearby slice using the modified active contour model. To make
full use of the correlative information between contiguous
slices, we introduce a grayscale model at the boundary points
of the active contour model to record the local region
characteristics of the desired object in the segmented slice,
and replace the external energy of the traditional active
contour model with an energy determined by the likelihood of
the grayscale model. Moreover, we introduce the active region
concept of the snake to improve segmentation accuracy and
reduce the complexity of the dynamic programming. Experiments
show that our algorithm can obtain the boundary of the
desired object from a series of medical images quickly and
reliably with little user intervention.
Wavelet-type expansions
Lasse Borup, Dept. of Mathematical Sciences, Aalborg
University, Denmark
We consider an orthonormal basis for L_{2}(R) consisting of
functions that are well localized in the spatial domain and
have compact support in the frequency domain. The
construction is based on smooth local cosine bases and is
inspired by F. Meyer and R. Coifman's brushlets, which are
local exponentials in the frequency domain.
We will see that brushlet systems associated with an
exponential-type partition of the frequency axis bear some
resemblance to wavelet systems. For example, such a system
constitutes an unconditional basis for various function
spaces (Lebesgue, Besov, Triebel-Lizorkin, Hardy, BMO), and
the norm in these spaces can be expressed in terms of the
expansion coefficients.
If instead the brushlet system is associated with a
partition of the frequency axis based on a polynomial rule
with a parameter α (0<α≤1), we will see that the system
forms an unconditional basis for the α-modulation spaces
introduced by P. Gröbner.
Gabor analysis, Rieffel induction
and Feichtinger's algebra as a link
Franz Luef, NuHAG, Dept. of Mathematics, University of
Vienna
Frames and flexibility
Ole Christensen, Dept. of Mathematics, Technical University
of Denmark
The increased flexibility (compared to orthonormal bases)
is often an argument for using frames. However, in most cases
we also want our frames to have some structure, and there are
cases where this additional constraint limits (or removes) the
freedom. For this reason we seek to extend classical frame
theory by allowing duals belonging to a different space than
the frame.
Given a frame for a subspace W of a Hilbert space H, we
characterize the set of oblique dual frame sequences (i.e.,
dual frame sequences that are not constrained to lie in W). We
then consider frame sequences in shift invariant spaces, and
characterize the translation invariant oblique dual frame
sequences. For a given translation invariant frame sequence an
easily verifiable condition on another shiftinvariant frame
sequence implies that its closed linear span contains a
generator for a translation invariant dual of the frame
sequence we start with; in particular, this result shows that
classical frame theory does not provide any freedom if we want
the dual to be translation invariant. In the case of frame
sequences generated by B-splines we can use our approach to
obtain dual generators of arbitrary regularity.
Camera calibration and pose
determination from N points
Yihong Wu, National Laboratory of Pattern Recognition,
Institute of Automation, Chinese Academy of Sciences
Perspective-N-Point camera calibration and pose
determination, called the PnP problem for short, has
attracted a lot of attention in the literature. The PnP
problem has different definitions, and different definitions
usually imply different sets of solutions, different
computational complexity, and different robustness.
When N<3, the PnP problem is under-constrained, and when
N>=6 the intrinsic and pose parameters can be determined
linearly. When N=3,4,5, the constraints from N points are not
enough to determine all the intrinsic and pose parameters.
So the literature usually assumes that the intrinsic
parameters are known and then determines the camera pose, or
determines the intrinsic and pose parameters from prior
knowledge of the intrinsic parameters.
In this work, with known intrinsic parameters, we
investigate the PnP problem for N=3,4,5 from both geometric
and algebraic standpoints, and make the following
contributions. First, we formally prove that the PnP problem
under the distance-based definition is equivalent to the PnP
problem under the orthogonal-transformation-based definition
when N>3, and is equivalent to the PnP problem under the
rotation-transformation-based definition when N=3. In other
words, if the rotation matrix in the perspective matrix is
relaxed to an orthogonal matrix, then the distance-based and
transformation-based definitions become equivalent when N>3.
Second, we introduce a depth-ratio-based technique to solve
the PnP problem. This technique is shown to be advantageous
over the traditional techniques of directly determining
either the distances or the transformation matrix. Third,
from the geometrical point of view, we obtain the upper
bounds of the number of solutions of the PnP problem under
the different definitions, and provide some insights into
the fact that the different bounds are generally associated
with different definitions. Fourth, a construction of
multiple solutions is presented for the P3P problem. Lastly,
the degenerate cases, i.e., coplanar or collinear control
points, are discussed. Surprisingly enough, it is shown that
if all the control points are collinear, the PnP problem
under the distance-based definition has a unique solution,
but the PnP problem under the transformation-based definition
is determined only up to one free parameter.
Content-based visual information
retrieval
Yu Jin Zhang, Dept. of Electronic Engineering, Tsinghua
University, China
While advances in compression algorithms have relaxed the
storage and transmission requirements to some extent, the
large volume of images and videos makes it difficult for a
user to browse through an entire database. On the other hand,
when such a collection is available or accessible, the
question asked by a user is often how to locate a particular
image or video clip. In the last ten years, research on
content-based image and video retrieval has made considerable
progress and provided suitable ways to answer the above
question efficiently. This talk will be based on a recent
book, “Content-based Visual Information Retrieval”. Using
the information collected for writing this book, as well as
the research experiments and results that supply suitable
materials for it, a general overview of this research
direction will be given, and different techniques for
querying and retrieving images, as well as for organizing and
searching video, will be described. In addition, some
existing problems and further research trends will be
discussed.
A Quasi-Wavelet algorithm for
second kind boundary integral equations
Hanlin Chen, Institute of Mathematics, Academia Sinica and
Silong Peng, National Asia Design Engineering Center, Chinese
Academy of Sciences
In solving integral equations with a logarithmic kernel, we
combine the Galerkin approximation with periodic
quasi-wavelets (PQW). We develop an algorithm for solving the
integral equations with only O(N log N) arithmetic
operations, where N is the number of knots. We also prove
that the Galerkin approximation has a polynomial rate of
convergence.
Spectral radii of refinement and
subdivision operators
Victor Didenko, Faculty of Science, Universiti Brunei
Darussalam
The spectral radii of refinement and subdivision operators
can be estimated by using norms of their symbols. In several
cases the exact value of the spectral radius is found.
The construction of wavelets and
some applications to pattern recognition
Daren Huang, Sun Yatsen University
In this paper, the construction of wavelets, especially
multi-band wavelets, and their corresponding properties are
discussed. As wavelet applications, multi-band wavelets are
used to classify textures successfully, and an algorithm for
image recognition, such as of face images, is developed
based on nonlinear wavelets. Research on applying wavelets to
the discrimination of similar numerals is also presented.
Radial basis functions, radial basis interpolation and
meshless methods
Zongmin Wu, Dept. of Mathematics, Fudan University
This talk will introduce radial basis functions, which are
composed of linear combinations of shifts of a radial
function Φ(x). The radial basis interpolation method and
related topics are reviewed; in particular, we describe how
to interpolate linear functional data. The radial basis
interpolation scheme for linear functional data will then be
applied to solve linear partial differential equations as a
numerical meshless method. Existence and uniqueness, as well
as convergence and related topics for the method, are
discussed.
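Plain radial basis interpolation (point-evaluation data, the simplest case of the linear-functional setting) can be sketched as follows (our example, with a Gaussian basis):

```python
import numpy as np

def rbf_interpolate(centers, values, eps=5.0):
    """Gaussian RBF interpolant  s(x) = sum_j c_j exp(-(eps*(x - x_j))^2)."""
    r = centers[:, None] - centers[None, :]
    A = np.exp(-(eps * r) ** 2)          # A_ij = phi(|x_i - x_j|)
    c = np.linalg.solve(A, values)       # enforce s(x_i) = f_i
    def s(x):
        B = np.exp(-(eps * (np.atleast_1d(x)[:, None] - centers[None, :])) ** 2)
        return B @ c
    return s

xj = np.linspace(0.0, 1.0, 11)
fj = np.sin(2 * np.pi * xj)
s = rbf_interpolate(xj, fj)
# the interpolation conditions hold exactly at the centers
```

For linear functional data one replaces the point evaluations defining A by the given functionals applied to the basis, which is what makes the meshless PDE solver possible.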
Convergence rates of subdivision
schemes associated with nonhomogeneous refinement equations
Song Li, Zhejiang University
On the spectra of frame
multiresolution analyses
Hong Oh Kim, Division of Applied Mathematics, KAIST
We characterize the spectrum of the 'central' space of an
FMRA and then give a new condition for an FMRA to admit a
single wavelet solely in terms of the spectrum of the FMRA.
This improves the results previously obtained by Benedetto
and Treiber and also by us. Our methods and results are
applied to the problem of the 'containment' of FMRAs in MRAs.
We first prove that any FMRA is always contained in an MRA,
and then we characterize those MRAs that contain 'genuine'
FMRAs in terms of the unique masks of the MRAs and the
spectra of the central spaces of the FMRAs to be contained.
Adaptive short-time Fourier
transform for whale call classification
Zhenyong Lin, Temasek Laboratories
The short-time Fourier transform is a classic
time-frequency analysis method for nonstationary signal
processing and representation. However, since its time and
frequency resolution is fixed, it is not adaptive to signals.
In this report, we develop an adaptive short-time Fourier
transform algorithm and apply it as an analysis tool to
underwater whale call classification. Experiments show that
our method can achieve excellent results.
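A fixed-resolution STFT, the baseline that the adaptive method improves on, can be sketched as follows (our numpy example with assumed window and hop sizes):

```python
import numpy as np

def stft(x, fs, nwin=128, hop=64):
    """Magnitude STFT with a Hann window; returns (times, freqs, |S|)."""
    win = np.hanning(nwin)
    starts = np.arange(0, len(x) - nwin + 1, hop)
    S = np.array([np.abs(np.fft.rfft(win * x[s:s + nwin])) for s in starts])
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    times = (starts + nwin / 2) / fs
    return times, freqs, S

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# a call-like signal: 50 Hz in the first half, 120 Hz in the second
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))
times, freqs, S = stft(x, fs)
f_peak = freqs[S.argmax(axis=1)]          # dominant frequency per frame
# early frames sit near 50 Hz, late frames near 120 Hz
```

The adaptive algorithm of the talk would, in effect, vary nwin with the signal content instead of fixing it.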
Dual Gramian analysis
Zuowei Shen, Dept. of Mathematics, National University of
Singapore
The dual Gramian analysis has been used in the study of the
frame properties of shift invariant systems. It allows us to
decompose the frame operator into a collection of simpler
operators (fibers) in the Fourier domain. Each one of the
'fiber' operators acts from a sequence space to the same
sequence space, and its matrix representation can be
explicitly described in terms of the Fourier transforms of
the generators of the shift invariant system. Application to
Gabor systems, which are shift invariant, shows the power of
the analysis. However, it cannot be directly applied to many
other systems, such as wavelet systems, which are not shift
invariant. This leads us to consider the dual Gramian
analysis in a more general setting, i.e. general shift
invariant systems, which include wavelet systems as a special
case. In
this talk, I will give the development and the recent results
on the dual Gramian analysis.
A Panoramic View on Gabor Analysis
(from linear algebra to pseudodifferential operators)
Hans G. Feichtinger, Dept. of Mathematics, University of
Vienna, Austria
It is the purpose of this series of lectures (the duration
and choice of specific topics will be flexible, depending on
the audience) to point out the relevance of Banach frames and
Riesz projection bases in the context of Gabor analysis, and
to provide background on the importance of certain Banach
spaces of functions (called Wiener amalgam spaces) in the
treatment of families of function spaces (so-called
modulation spaces) described by the time-frequency behaviour
of their elements. These spaces in turn are relevant for the
description of the behaviour of various kinds of operators,
in particular slowly time-varying systems.
Rough outline:
A) From linear algebra to Gabor analysis over finite
Abelian groups (generating systems, frames, pseudo-inverse
and frames, Gram matrix and biorthogonal systems; specific
properties of coherent systems; redundancy; algebraic
properties of Gabor systems). Introductory part (avoiding
questions of convergence and continuity).
B) Continuous transforms (in particular the STFT),
sampling, the role of Wiener amalgam spaces and their basic
properties, modulation spaces defined by means of the STFT
(1-2 hours).
C) Fundamental properties of GaborWeylHeisenberg systems
(Janssen representation, fundamental identity, adjoint TFlattice,
RonShen principle, etc.) and the role of modulation spaces in
the description of these facts; in particular the usefulness
of the Gelfand triple (S_0,L_2,S_0') (Feichtinger's algebra
and its dual). (1 hour)
D) Description of operators using time-frequency concepts
(e.g. the Kohn-Nirenberg symbol resp. the spreading function),
indications on pseudodifferential operators (Heil, Groechenig)
in the context of time-frequency analysis (1+ hour)
As time permits we may also come to a discussion of the
following topics: E) Possible continuation (rather for
discussions): irregular Gabor families, questions of excess
and redundancy, changing the lattice constants, automatic
continuity properties of the frame operator, and much more.
(Hint: the final talk on Gabor multipliers will make use of
many of the results described in this tutorial session.)
A theory of Gabor multipliers
Hans G. Feichtinger, Dept. of Mathematics, University of
Vienna, Austria
Gabor multipliers are linear operators which arise by
pointwise multiplication of the Gabor coefficients of a
function/distribution by a certain sequence of real numbers
(over the TF-lattice in use). We will emphasize questions of
the following type:
- What are the mapping properties of such operators (e.g.
in terms of modulation spaces)?
- What about the uniqueness of the representation of a
Gabor multiplier?
- What are the approximation properties of general mappings
by Gabor multipliers?
- What about the similarity of Gabor multipliers using
similar lattices?
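As a hedged illustration of the definition above (not taken from the talk), a Gabor multiplier can be realized numerically as analysis on the TF-lattice, pointwise multiplication by the symbol, and synthesis. The window, lattice and cut-off symbol below are arbitrary choices of ours:

```python
import numpy as np
from scipy.signal import stft, istft

def gabor_multiplier(x, symbol, fs=1.0, nperseg=64):
    """Analyze x on the STFT (Gabor) lattice, multiply each
    coefficient pointwise by the symbol, then synthesize."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    m = symbol(f[:, None], t[None, :])  # symbol sampled on the TF-lattice
    _, y = istft(Z * m, fs=fs, nperseg=nperseg)
    return y[: len(x)]

# A cut-off symbol in frequency acts as a crude low-pass filter.
fs = 128.0
tt = np.arange(1024) / fs
x = np.sin(2 * np.pi * 5 * tt) + np.sin(2 * np.pi * 40 * tt)
y = gabor_multiplier(x, lambda f, t: (f < 20.0).astype(float), fs=fs)
```

Here the symbol depends on frequency only; a genuinely time-varying symbol is handled the same way.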
Gabor Deconvolution: the
recovery of seismic reflectivity in an attenuating medium
Gary Margrave, Dept. of Geology and Geophysics, The
University of Calgary
Seismic wave propagation is associated with a progressive
loss of frequency content as the wave propagates. This means
that the reflected waveforms recorded at the earth's surface
form a seismogram with nonstationary spectral character. While
Wiener deconvolution was designed to sharpen these reflections
by estimating and inverting their waveform, its inherent
stationarity means that it can only succeed in a limited and
local sense. We present an extension of the Wiener process
that is accomplished in the Gabor transform domain. Using a
pseudodifferential operator model for the attenuation process,
we show that Gabor deconvolution can estimate and invert a
nonstationary waveform with surprising success. In essence, we
use the Gabor transform to accomplish an approximate
factorization of the pseudodifferential operator. The
resulting reflectivity estimates have greater resolution than
can be achieved with stationary processes. Tests on real
seismic data have produced excellent images. In a controlled
experiment, in which a seismic wavefield was measured
simultaneously in a borehole and on the earth's surface, we
find that the waveforms estimated in the borehole closely match
those estimated by Gabor deconvolution on the surface data.
The representation of linear
operators by Gabor multipliers
Michael P. Lamoureux, Dept. Mathematics and Statistics,
University of Calgary
For a given pair of analysis and synthesis windows, we
examine which class of linear operators can be represented as a
Gabor multiplier, and their relationship to the symbol of a
Gabor multiplier. The notion of a compatible window pair is
introduced, which includes Gaussians, as well as other
examples of smooth windows. A complete answer to the
representation question is obtained for compatible windows.
These results are used to justify numerical techniques in
seismic imaging that use Gabor multipliers to represent
nonstationary filters and wavefield extrapolators.
Filter approximation by Wavelet
filter banks
Silong Peng, National ASIC Design Engineering Center,
Chinese Academy of Sciences
In many applications, we need to construct signal-adaptive
filters, especially adaptive wavelet filters. This mainly
leads to an optimization problem which is difficult to solve.
For real-time image processing, a fast algorithm is
preferable. In this paper, we give a direct method to
approximate a given filter with wavelet filter banks.
Experiments show excellent performance.
Content based image data
retrieval and classification using nonnegative encoding
Robert Plemmons, Dept. of Maths & Computer Science, Wake
Forest University, USA
This study introduces a novel clustering methodology for
image and sensor data retrieval, analysis and classification.
The methodology is based upon encoding the data using low-rank
nonnegative matrix factorization algorithms to preserve
natural data nonnegativity and thus avoid subtractive basis
vector and encoding interactions present in methodologies such
as principal component analysis. Some existing nonnegative
matrix factorization techniques are reviewed and some new ones
are proposed. Numerical experiments are reported using iris
image data in biometric identification systems, phase screen
correlation data in atmospheric imaging applications, and
spectral data for identification and classification of orbital
space objects.
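To make "nonnegative encoding" concrete, here is a minimal sketch of one standard NMF algorithm, the multiplicative updates of Lee and Seung; the talk reviews and extends techniques beyond this simple scheme:

```python
import numpy as np

def nmf(V, r, iters=300, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (m x n) as V ~ W H with
    nonnegative W (m x r) and H (r x n), via the classical
    Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update encoding
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis
    return W, H

# Exactly rank-2 nonnegative data should be recovered closely.
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 20))
W, H = nmf(V, 2)
```

The multiplicative form guarantees that W and H stay nonnegative throughout, which is the point of avoiding subtractive interactions.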
General uncertainty principles for
Hilbert spaces
Say Song Goh, Dept. of Mathematics, National University of
Singapore
The classical Heisenberg uncertainty principle plays an
important role in time-frequency analysis of signals. This
inequality can be generalized and formulated in an
abstract mathematical setting. In this talk, we shall present
a general uncertainty inequality bounding the commutator of
two linear operators acting on a Hilbert space, where no
additional assumptions are made on the two operators. The
Heisenberg uncertainty principle and the periodic uncertainty
principle proposed by Breitenberger are special cases of this
general inequality. We shall also discuss a
characterization of equality in the general inequality, and
asymptotic equality in the absence of equality for specific
examples. Another issue of interest that will be highlighted
is the description of geometric regions formed by quantities
appearing in the inequality when the two operators involved
are selfadjoint. Finally, we shall extend the general
uncertainty principle to handle the multivariate case and also
provide results on equality and asymptotic equality. Frequent
references to the settings of the Heisenberg uncertainty
principle and the periodic uncertainty principle will be made
throughout the talk as illustrations. This talk is based on
collaborations with Charles A. Micchelli and Tim N. T.
Goodman.
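As a reference point (a standard statement of ours, which the talk's general inequality presumably resembles in the special cases mentioned), the commutator form of the uncertainty inequality for two operators $A$ and $B$ on a Hilbert space reads:

```latex
% For f in the domain of AB and BA, with \|f\| = 1,
% and arbitrary scalars a, b:
\|(A - a)f\|\,\|(B - b)f\| \;\ge\; \tfrac{1}{2}\,
  \bigl|\langle [A,B]f,\, f\rangle\bigr|,
\qquad [A,B] := AB - BA .
% Taking A = multiplication by x and B = -i\,\frac{d}{dx}
% on L^2(\mathbb{R}) gives [A,B] = iI, hence the Heisenberg
% inequality \Delta x\,\Delta\xi \ge \tfrac{1}{2}.
```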
Loop groups, conjugate quadrature
filters and wavelet approximation
Wayne M. Lawton, Dept. of Mathematics, National University
of Singapore
Orthonormal wavelet bases associated to a multiresolution
analysis are constructed from sequences c whose Fourier
transforms C correspond to matrices [C(z) zC(z); zC(z) C(z)]
that map the unit circle into the special unitary group. The
set of such functions, under pointwise multiplication, forms
an infinite dimensional loop group. Smooth wavelets arise from
C that admit divisors having the form (1 + z) while compactly
supported wavelets arise from C that are Laurent polynomials
and this subgroup is dense by a classical approximation result
that is based on Trotter's formula. We extend this result to
show that every wavelet can be approximated, in appropriate
Sobolev spaces, by a sequence of compactly supported wavelets.
We obtain this existence result by combining methods and
results from the theory of loop groups with a classical result
from algebraic topology and an extension of a recent
approximation-theoretic result concerning Bezout identities.
An integrated optical/digital
approach for improved image restoration
Robert J. Plemmons, Wake Forest University
A novel and successful optical/digital approach for
removing certain aberrations in optical imaging systems
involves placing an optical mask between an imagerecording
device and an object to encode the wave front phase before the
image is recorded, followed by digital image restoration to
decode the phase. We have shown that when appropriately
engineered, such an optical mask can also act as a form of
preconditioner for certain types of image restoration
algorithms. Moreover, it can boost information in the signal,
before it is recorded, well above the noise level, enabling
digital restorations of very high quality. In this talk, we 1)
examine the influence that a phase mask has on the incoming
signal and how it subsequently affects the performance of
restoration algorithms, and 2) explore the design of optical
masks, a difficult nonlinear optimization problem with
multiple design parameters, for removing certain aberrations
and for maximizing information content in recorded images. The
talk involves joint work with Paul Pauca, Sudhakar Prasad,
Todd Torgersen, and Joe van der Gracht, on Pupil Phase
Engineering.
Preconditioning regularized least
squares problems arising from high-resolution image
reconstruction from low-resolution frames
Furong Lin, Dept. of Mathematics, Shantou University, China
In this paper, we study the problem of reconstructing a
high-resolution image from multiple undersampled, shifted,
degraded frames with subpixel displacement errors from
multisensors. Preconditioned conjugate gradient methods with
cosine transform based preconditioners and incomplete
factorization based preconditioners are applied to solve this
image reconstruction problem. Numerical examples are given to
demonstrate the efficiency of these preconditioners. We find
that cosine transform based preconditioners are effective when
the number of shifted low-resolution frames is large, but
are less effective when the number is small. However,
incomplete factorization based preconditioners work quite well
independent of the number of shifted low-resolution frames.
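The effect of a preconditioner on conjugate gradient iterations can be sketched generically. This toy uses a simple diagonal (Jacobi) preconditioner on a stand-in system, not the cosine-transform or incomplete-factorization preconditioners of the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# A symmetric positive definite system with a strongly varying
# diagonal, standing in for the regularized normal equations.
n = 300
d = 2.0 + 100.0 * np.arange(n) / n
A = diags([-0.1 * np.ones(n - 1), d, -0.1 * np.ones(n - 1)],
          [-1, 0, 1]).tocsr()
b = np.ones(n)

def solve(M=None):
    count = [0]
    x, info = cg(A, b, M=M,
                 callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return x, info, count[0]

x_plain, info_p, its_plain = solve()
M_jac = LinearOperator((n, n), matvec=lambda v: v / d)  # Jacobi preconditioner
x_pcg, info_q, its_pcg = solve(M_jac)
```

Even this crude preconditioner clusters the spectrum near 1 and cuts the iteration count; the paper's structured preconditioners exploit the Toeplitz-like structure of the actual reconstruction matrices.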
Wavelet algorithms for high-resolution
image reconstruction
Zuowei Shen, Dept. Mathematics, National University of
Singapore
High-resolution image reconstruction refers to the
reconstruction of high-resolution images from multiple
low-resolution, shifted, degraded samples of a true image. In
this talk, we give a method based on the wavelet analysis. By
expressing the true image as a function, we derive iterative
algorithms which recover the function completely from the
given low-resolution functions. These algorithms decompose the
function obtained from the previous iteration into different
frequency components in the wavelet transform domain via a
redundant or irredundant system and add them into the new
iterate to improve the approximation. We apply wavelet
(packet) thresholding methods to denoise the function obtained
in the previous step before adding it into the new iterate.
Our numerical results show that the reconstructed images from
our wavelet algorithms are better than those from the Tikhonov
least-squares approach. Some other applications of this method
are also discussed.
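The thresholding step used between iterations can be sketched with a one-level Haar transform and soft thresholding, a simplified stand-in for the wavelet (packet) thresholding methods of the talk:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar analysis, soft-threshold the detail
    coefficients, then synthesize (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
den = haar_denoise(noisy, 0.5)
```

For a smooth signal the detail coefficients are nearly zero, so shrinking them removes noise with little distortion; multilevel and redundant transforms refine this idea.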
Scalable multimedia communication
over heterogeneous networks
Fuchun Shang, Temasek Laboratories, Singapore
In a heterogeneous network environment, the
hardware capabilities of network devices to consume multimedia
information vary widely. For efficient utilization of
system resources, it is important to transmit just enough
information for the receiving end to handle. Issues of
scalable media compression, media stream organization and
media quality negotiation and control in a multimedia
communication system that can provide different media quality
streams to different users at both codec and system level will
be presented.
A numerical method for maximum
entropy deconvolution
JianGuo Liu, Dept. of Mathematics, University of Maryland
In this talk, I will discuss some numerical methods for
image deconvolution based on maximum entropy. Numerical issues
such as efficiency and reliability in the design of such
methods will be discussed. Comparisons with other
deconvolution techniques will be presented using
examples of image deconvolution in surface physics.
Research of the centre for
wavelets, approximation and information processing
Say Song Goh, Dept. of Mathematics, National University of
Singapore
The Centre for Wavelets, Approximation and Information
Processing of the National University of Singapore is a
research centre of the Faculty of Science in the Department of
Mathematics. It is a multidisciplinary research centre that
emphasizes the synergy of mathematics, engineering and
computer science. Its research activities are on mathematical
signal, image and information processing, as well as their
analysis and understanding. In this talk, we shall provide an
overview of the centre and highlight its various areas of
research. Some of the centre’s research capabilities include
image, document image and video compression, video
summarization, denoising and resolution enhancement,
underwater signal processing and recognition, and signal
identification and classification.
Image restoration for historical
documents using directional wavelet transform
Tao Xia, CWAIP, National University of Singapore
In this talk we will introduce a novel algorithm to clean
up a large collection of historical handwritten documents. Due
to the seepage of ink over long periods of storage, the front
page of each document has been severely marred by the
reverse-side writing. Earlier attempts have been made to match
both sides of a page to identify the offending strokes and
eliminate them with the aid of a wavelet transform. Perfect
matching, however, is difficult due to document skew, differing
resolutions, inadvertent omission of the reverse side, and page
warping during image capture. A new approach is proposed to
avoid double side mapping by using a directional wavelet
transform to distinguish the foreground and reverse side
strokes much better than the conventional wavelet transform.
Experiments have shown that the algorithm enhances the
readability of each document significantly without the mapping
with its reverse side.
FOI/ROI video compression
Li Hui, CWAIP, National University of Singapore
In video compression, frames of interest (FOI)
and regions of interest (ROI) can be singled out for more
attention: high priority is given to the FOI or ROI by
allocating them more bits than the rest. This yields higher
subjective video quality at the same rate, at the cost of the
quality of the other parts. We have developed new FOI/ROI
algorithms for video compression. Simulation results show
excellent performance.
Multiwavelets, diamond search, and
multiscalable video codec platform
JoYew Tham, Centre for Industrial Mathematics, National
University of Singapore
The recent explosion in digital video storage and delivery
has presented strong motivations for high performance video
compression solutions. Coupling this with the growing
heterogeneity of user and network requirements, there arises a
pressing need for a flexible video compression framework that
can support a “generate once, scale many” concept. This talk
presents an overview of a highly scalable video compression
platform, covering the design and application of good
multiwavelet filters, a novel diamond search algorithm for
fast blockbased motion compensation, and a multiscalable
video codec architecture that supports simultaneous video
scalability in terms of bit rate, distortion, frame rate,
color, spatial resolution, and computational complexity within
a single compressed video bit stream.
Wavelet-Galerkin method and natural
boundary integral equations
Wei Lin, Dept. of Mathematics, Zhongshan University
In this paper we apply the wavelet-Galerkin method to
the numerical solution of harmonic, biharmonic, Stokes,
elasticity and Helmholtz equations, respectively. All of them
are reduced to natural boundary integral equations with
strongly singular kernels. Hermite wavelets, Shannon wavelets
and spline wavelets are utilized.
Fingerprint recognition technology
Jufu Feng, Center for Information Science, National
Laboratory on Machine Perception, Peking University, China
In this talk, I will introduce the history of fingerprinting
and automated fingerprint identification systems (AFIS). Ancient
people were aware of the individuality of fingerprints. From
the late sixteenth century to the late nineteenth century, many
findings established the foundation of modern fingerprint
recognition. In the early 1960s, automated fingerprint
identification systems (AFIS) were developed. Since then, AFIS
has been used worldwide.
Wavelet decompositions and
nonlinear approximations
Dung Dinh, Information Technology Institute, Vietnam
National University
In this talk, we will discuss the following questions:
- nonlinear approximations of functions by sets of finite
cardinality, and the classical entropy numbers measuring their
optimality;
- nonlinear approximations of functions by sets of finite
pseudo-dimension, and the related nonlinear widths measuring
their optimality; and
- constructions and use of wavelet decompositions for
building asymptotically optimal methods for these nonlinear
approximation characterizations.
Tiles, self-affine sets and scaling
functions
Ka Sing Lau, Dept. of Mathematics, The Chinese University
of Hong Kong
Scaling functions are used to construct wavelets. Such a
function is generated by a subdivision scheme. The most
studied case is dilation by 2 with integer translations;
the multivariate case is more complicated. In this talk we
will consider the supports of such functions, which can be
viewed, more generally, as the invariant sets of iterated
function systems with expanding integral dilation matrices and
integral translations; these are self-affine sets. A
special case is the self-affine tiles. We will discuss
the geometry of such sets and related properties.
A wavelet set construction and
tight frames
Songkiat Sumetkijakan, Chulalongkorn University, Thailand
Since wavelet sets were introduced in the first existence
proof of multidimensional single dyadic orthonormal wavelets
by Dai, Larson, and Speegle, many examples and a few general
constructions have been given. An intuitive wavelet set
construction is due to Benedetto and Leon. Starting from a set
K_0 and a map T with some specified properties, it is an
iterative construction of sets K_n that approach a wavelet
set. Though the scope of this construction has some
limitations, we extend it to a more general setting. In
particular, the construction can produce a multidimensional
Journé wavelet set, the wedding cake set, and a wavelet set
with only two connected components. In addition, each finite
iteration of the construction is proved to give rise to a
tight frame wavelet set.
Wavelet frames: theory and
constructions
Zuowei Shen, Dept. Mathematics, National University of
Singapore
This talk is about wavelet frames. The redundant
representation offered by wavelet frames has already been put
to good use for signal denoising, and is currently explored
for image compression. We restrict our attention to wavelet
frames constructed via multiresolution analysis, because this
guarantees the existence of fast implementation algorithms. We
shall explore the `power of redundancy' to establish general
principles and specific algorithms for constructing framelets
and tight framelets. In particular, we shall give several
systematic constructions of spline and pseudo-spline tight
frames and symmetric bi-frames with short supports and high
approximation orders. Several explicit examples are discussed.
This talk is based on joint work with Ingrid Daubechies, Bin
Han and Amos Ron.
Pointwise properties of bounded refinable function
vectors
DiRong Chen, Dept. of Applied Mathematics, Beijing
University of Aeronautics and Astronautics
Eigenvalues and Biorthogonal
Eigensystems of scaling operators
Seng Luan Lee, Dept. of Mathematics, National University of
Singapore
Approximation and interpolation by
radial basis function on Sobolev space
Jungho Yoon, Dept. of Mathematics, Ewha W. University, S.
Korea
Approximation by translates of suitable radial basis
functions is an important approach towards solving the
(multivariate) scattered data problem. Its strengths are as
follows:
- it facilitates the evaluation of the approximant;
- the accuracy of approximation is usually very
satisfactory, provided the approximand f is reasonably
smooth;
- there is enough flexibility in the choice of basis
functions.
In this talk, we discuss the approximation power of radial
basis function interpolation and approximation on Sobolev
space.
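As a small, hedged illustration of the scattered data problem (the test function and the thin-plate-spline kernel are our choices, not the talk's):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
sites = rng.uniform(-1, 1, size=(200, 2))        # scattered data sites
f = lambda p: np.sin(np.pi * p[:, 0]) * p[:, 1]  # a smooth approximand
interp = RBFInterpolator(sites, f(sites), kernel="thin_plate_spline")
query = rng.uniform(-0.8, 0.8, size=(50, 2))     # evaluate inside the hull
err = np.max(np.abs(interp(query) - f(query)))
```

The accuracy one observes here is exactly what the theory quantifies: approximation orders for RBF interpolation of functions in Sobolev spaces.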
An orthogonal splines approach to
periodic wavelet frames
Say Song Goh, Dept. of Mathematics, National University of
Singapore
Wavelet frames are of much current interest in wavelet
research. They contain redundancy and could provide more
flexibility in some applications. In this talk, we shall
present an approach based on orthogonal splines for the study
of frame multiresolution analyses and wavelet frames of
periodic functions. Results on frames for shiftinvariant
subspaces of periodic functions will be obtained in terms of
orthogonal splines. We shall also discuss equivalent
conditions for the existence of a wavelet frame starting from
a frame multiresolution analysis, and an algorithmic
construction when such a wavelet frame exists. Trigonometric
polynomial frames will be used to illustrate the ideas
presented.
Multiwavelets associated with
countable groups of unitary operators in Hilbert spaces
Wai Shing Tang, Dept. of Mathematics, National University
of Singapore
We give a characterization of biorthogonality among Riesz
multiwavelets in terms of certain invariant properties of
their associated core spaces. A large family of non-biorthogonal
Riesz multiwavelets is exhibited. We also discuss some results
on linear perturbation of orthonormal multiwavelets. This is
joint work with David Larson and Eric Weber.
Singularity and Instantaneous
Frequency Detection from Modulus Maxima of Continuous Wavelet
Transforms
WenLiang Hwang, Institute of Information Sciences Academia
Sinica
Many problems in signal processing, image processing, and
pattern recognition use the wavelet transform to detect local
features. Among these, the most important features may be
singularity and instantaneous frequency. Singularity measures
the local irregularity of a signal, while instantaneous
frequency measures its local dominant variations. Singularity
can be detected from the modulus maxima of a real-valued
wavelet transform, while instantaneous frequency can be
obtained from those of complex-valued wavelets. We prove that
isolated singularities can also be detected from the modulus
maxima of complex-valued wavelets. Thus, using the
complex-valued wavelet transform, one is able to simultaneously
detect isolated singularities and instantaneous frequency from
its modulus. We will also show applications in which shape is
derived from texture variations measured by instantaneous
frequencies in textured images.
What's Math got to do with it?
Mathematics at the frontiers of sciences and technology
Tony Chan, Department of Mathematics, University of
California, USA
Mathematics is probably the least understood (and
appreciated) among the hard sciences by the general public.
Its public image is probably a mixture of inscrutability, fear
and irrelevance to everyday life. And this despite the fact
that most adults have been exposed to many years of mathematics
during their education. In fact, mathematics is at the foundation of
our highly technological society. It is at the core of much of
the current information technology revolution and many predict
that it will play an equally important role in the burgeoning
biomedical revolution ushered in by the advent of modern
genetics.
So why is Mathematics having such a hard time getting the
message across to the public? The cynics will say it is
because of the abysmal and often traumatic experiences many
people had in mathematics classes in school. It is
true that the rigor, the abstraction, and the scope of
mathematics are quite challenging for many students. And much
of frontier research mathematics is close to impossible for
outsiders to understand. However, the APPLICATION of
mathematics can be found in almost all walks of life, and
often in the most unexpected places. My conviction is that if
the public is more aware of the "usefulness" and "ubiquity" of
mathematics, not only will the public image of mathematics
improve, but it'll also provide the public with the necessary
understanding to make informed political decisions regarding
how society should invest in its technological future.
In this talk, I'll provide some examples of interesting
applications of frontier research mathematics in areas that
are quite close to everyday life. Examples include the movies,
the stock market, the internet, medicine, communication, etc.
Mathematics in the real world and
the fake world
Stanley Osher, University of California, Los Angeles, USA
With the advent of the computer, we can now develop
algorithms that perform incredible tasks: special effects in
Hollywood, catching bad guys on video, predicting all kinds of
natural and unnatural phenomena. A common theme in these
algorithms is rather elementary geometry. In this talk I'll
discuss geometric algorithms and their applications in
everyday life.
Impulse noise removal by median-type
noise detector and edge-preserving regularization
ChungWa Ho, Department of Mathematics, The Chinese
University of Hong Kong
This paper proposes a two-phase scheme for impulse noise
removal. In the first phase, an adaptive median filter is used
to identify the noise candidates. In the second phase, the
image is restored by an edge-preserving regularization method
applied only to the noise candidates. In terms of edge
preservation and noise suppression, our restored images are
significantly better than those restored using nonlinear
filters alone or edge-preserving regularization alone.
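The two-phase idea can be sketched as follows. This is a simplified stand-in: phase 2 here just substitutes the local median for the flagged pixels, rather than solving the paper's edge-preserving regularization:

```python
import numpy as np
from scipy.ndimage import median_filter

def two_phase(img, tol=30.0):
    """Phase 1: flag pixels deviating from their 3x3 median by
    more than tol (noise candidates). Phase 2 (simplified):
    correct only the flagged pixels, leaving the rest untouched."""
    med = median_filter(img, size=3)
    cand = np.abs(img - med) > tol
    out = img.copy()
    out[cand] = med[cand]
    return out, cand

# Salt-and-pepper corruption of a smooth ramp image.
clean = np.tile(np.linspace(50.0, 200.0, 64), (64, 1))
noisy = clean.copy()
rng = np.random.default_rng(2)
ii = rng.integers(0, 64, 200), rng.integers(0, 64, 200)
noisy[ii] = rng.choice([0.0, 255.0], size=200)
restored, flagged = two_phase(noisy)
```

Restricting the correction to the flagged pixels is what preserves edges: uncorrupted pixels, including sharp transitions, pass through unchanged.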
The numerical algorithm for an
inverse problem in optical mammography
Jerry YinTzer Shih, Department of Medical Imaging
Technology, Chung Shan Medical University
We discuss the implementation of a finite element algorithm
with C^1 elements for solving a coupled system of generalized
biharmonic-type equations related to a certain inverse
problem. We consider both direct and preconditioned conjugate
gradient methods for the rapid solution of the resulting rather
large linear system, and our development of an efficient
preconditioner.
Superresolution image restoration
from blurred lowresolution images
Andy Chin Ko Yau, Department of Mathematics, The Chinese
University of Hong Kong
Latest results in image
superresolution
Nirmal Bose, Pennsylvania State University, USA
The wavelet superresolution algorithm proposed by Nguyen et
al. in 2000 ignored the effects of using different choices of
mother wavelets and scaling functions. In their work, only the
Daubechies wavelet and scaling function were applied and
tested. Here, a detailed comparison of the desirable
properties in superresolution of several finitely supported
scaling function and wavelet families is given. Second,
adaptation to nonsmooth domains, irregularly sampled data, and
weighted measures is considered. Third, a comparison in
performance, features, and constraints of several
superresolution algorithms that have been popularized during
the last decade is provided. Finally, problems to be overcome
are summarized.
Nonsmooth cost functions in
image and signal reconstruction
Mila Nikolova, Centre de Mathematiques et de Leurs
Applications CMLA (CNRS UMR 8536) ENS de Cachan, France
We consider the recovery of an image or a signal x*, based
on data y, by minimizing a cost function of the form F(x,y) =
D(x,y) + c R(x), where D is a data-fidelity term, R is a
regularization term and c > 0 is a parameter. Examples of
applications are image restoration, segmentation, motion
estimation, color reproduction, optical imaging, tomography,
and seismic and nuclear imaging. Classical ways to construct
cost functions F are based on the calculus of variations or on
probabilistic considerations. In the past, different heuristics
have been adopted to specify cost functions for different
applications.
Instead, we analyze the properties of minimizers x* as an
implicit function of both the data y and the shape of the
function F. In this talk, we focus on the smoothness of the
cost function F. Points of interest are the recovery of
homogeneous regions and edges, and the processing of outliers
and impulse noise.
(a) Nonsmooth regularization to recover strongly
homogeneous regions
Regularization R accounts for priors on x such as
smoothness and other properties. Usually R is the sum of terms
of the form f(g_i x) where g_i are linear operators which
yield the differences between neighboring samples in x and f
is a potential function. We show that, typically, the local
minimizers x* of F(.,y) satisfy g_i x* = 0 for numerous indexes
i. So, if the g_i yield first-order differences, x* involves
constant regions. This explains the staircasing effect
observed in total variation methods. If the g_i yield
second-order differences, x* involves planar regions, etc. We
call such
regions strongly homogeneous. This is a striking property,
both visually and mathematically. In comparison, such a
behavior does not occur if R is smooth.
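The "strongly homogeneous" phenomenon can be checked on the smallest possible example: a two-sample signal with a first-order difference penalty, where exhaustive grid search stands in for the minimization:

```python
import numpy as np

# F(x) = 0.5*||x - y||^2 + c*|x_2 - x_1|: for c large enough
# relative to |y_2 - y_1|, the minimizer satisfies x_1 = x_2,
# i.e. g x* = 0 for the difference operator g = [-1, 1].
y1, y2, c = 0.0, 1.0, 1.0
g = np.linspace(-0.5, 1.5, 401)              # grid step 0.005
X1, X2 = np.meshgrid(g, g, indexing="ij")
F = 0.5 * ((X1 - y1) ** 2 + (X2 - y2) ** 2) + c * np.abs(X2 - X1)
i, j = np.unravel_index(np.argmin(F), F.shape)
x_star = (g[i], g[j])                        # minimizer on the grid
```

With a smooth penalty such as c(x_2 - x_1)^2 in place of the absolute value, the minimizer has x_1 strictly different from x_2, which is exactly the contrast drawn in the abstract.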
(b) Nonsmooth data-fidelity to satisfy exactly a certain
number of the data entries
The data-fidelity term D is generally a smooth function, and
nonsmooth data-fidelity terms are avoided in image processing.
We focus on functions D combining terms of the form
d(a_i x - y_i), where the a_i are linear operators modelling
the data acquisition, and d is a function which is nonsmooth at
zero. We show that, typically, the local minimizers x* of
F(.,y) fit exactly a certain number of the data entries: there
are many indexes i for which a_i x* = y_i. This is a strong
mathematical property which produces a surprising visual
effect. Based on it, we construct cost functions allowing
outliers and impulse noise to be detected and selectively
smoothed.
All theoretical results are illustrated using various
numerical experiments in the context of practical image and
signal reconstruction problems.
Analysis of half-quadratic
minimization methods for signal and image recovery
Mila Nikolova, Centre de Mathematiques et de Leurs
Applications CMLA (CNRS UMR 8536) ENS de Cachan, France
We address the minimization of regularized convex cost
functions which are customarily used for edge-preserving
restoration and reconstruction of signals and images. In order
to accelerate computation, multiplicative and additive
half-quadratic reformulations of the original cost function
were pioneered in Geman & Reynolds (1992) and Geman &
Yang (1995). The alternate minimization of the resultant
(augmented) cost functions has a simple explicit form. Our
goal is to provide a systematic analysis of the convergence
rate achieved by each of these methods. We show that the
number of iterations required for convergence is considerably
smaller for the multiplicative form than for the additive form.
However, the computational cost of each iteration is much
higher for the multiplicative form than for the additive form.
The global assessment is that minimization using the additive
form of half-quadratic regularization is faster than using the
multiplicative form. When the additive form is applicable, it
is hence recommended. Extensive experiments demonstrate that
both methods are substantially faster than the standard Matlab
routines. (Joint work with Michael Ng, University of Hong
Kong.)
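For reference, the two reformulations can be sketched as follows (standard forms from the cited papers; writing the regularizer as a sum of terms $\varphi(g_i x)$ with auxiliary variables $b_i$ is our notational choice):

```latex
% Multiplicative form (Geman & Reynolds, 1992):
\varphi(t) = \min_{b \ge 0} \bigl( b\,t^2 + \psi(b) \bigr)
\;\Longrightarrow\;
\widetilde F(x,b) = D(x,y) + c \sum_i \bigl( b_i (g_i x)^2 + \psi(b_i) \bigr).

% Additive form (Geman & Yang, 1995):
\varphi(t) = \min_{b} \bigl( \tfrac12 (t-b)^2 + \xi(b) \bigr)
\;\Longrightarrow\;
\widetilde F(x,b) = D(x,y) + c \sum_i \bigl( \tfrac12 (g_i x - b_i)^2 + \xi(b_i) \bigr).

% Alternating minimization: each b-step is separable with an explicit
% solution; each x-step solves a linear system, whose matrix is fixed
% in the additive form (hence the cheaper iterations noted above).
```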
Structured matrix computation and
image recovery applications
Michael Ng, Dept. of Mathematics, University of Hong Kong
In this talk, we address some structured matrix computation
issues related to image recovery applications:
- the multiplicative and additive forms in the minimization
of regularized convex cost functions which are customarily
used for edge-preserving restoration and reconstruction of
signals and images;
- factorized banded inverse preconditioners for matrices
with Toeplitz structure arising in nonlinear image restoration
applications; and
- preconditioning of regularized least squares problems
arising in high-resolution image reconstruction from
low-resolution frames.
Data hiding: theory and applications
Pierre Moulin, Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Nasir Memon, Computer Science Dept., Polytechnic University,
USA
In the first part of the tutorial we will look at
applications of data hiding. Specifically, we look at what
mechanisms could be used to securely realize these
applications. We will examine the following four applications
(50 minute sessions each):
- Ownership assertion: Basic framework. Invertibility attack and copy attack. Countering the invertibility attack. Protocol for secure ownership assertion. Zero-knowledge protocols.
- Authentication: Basic framework. Attacks. Key management problem. Distortion-bounded authentication.
- Steganography: Review of techniques. Steganalysis techniques. Universal steganalysis. Notions of steganography capacity and security. Discussion on the future of steganography.
- Copy protection: Basic framework. Attacks. Review of data hiding based copy protection mechanisms in consumer appliances and discussion of their security.
In the second part of this tutorial we will present some
recent approaches to modeling watermarking and data hiding as
communication problems. We first overview the watermarking
problem (low payload) and show how it can be formulated as a
problem of shaping signal statistics. We further describe how
several popular watermarking techniques fit into this model
and describe their performance. We also review the data hiding
problem (high payload) and reduce it to a channel coding
problem with side information. The unifying theme is that
watermarking algorithms can (should) be based on fundamentals
of detection theory, estimation theory, game theory, and
information theory in general. The second part will be
organized into five 50 minute sessions as follows:
- Early work: spread-spectrum techniques and LSB techniques
- Binning schemes; modern quantization-based codes: ideas and basic constructions
- Performance analysis for watermarking systems: error probabilities
- Data hiding: relation to communication with side information, capacity analysis, random binning, application to images
- Advanced topics: desynchronization attacks, sensitivity attacks, steganography
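To make the "early work" item concrete: the simplest LSB technique hides one payload bit in the least significant bit of each pixel. Below is a minimal Python sketch (an illustrative toy, not a scheme from the tutorial; as the sessions above discuss, plain LSB embedding is fragile against attacks):

```python
def lsb_embed(pixels, bits):
    """Hide payload bits in the least significant bit of each pixel."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the payload bit
    return out

def lsb_extract(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # 8-bit grayscale samples
payload = [1, 0, 1, 1]
stego = lsb_embed(cover, payload)
print(lsb_extract(stego, 4))  # -> [1, 0, 1, 1]
```

Each stego pixel differs from its cover pixel by at most one grey level, which is why LSB embedding is imperceptible yet trivially erased by requantization.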
Wavelet algorithms for high-resolution image reconstruction
Raymond Chan, Dept. of Mathematics, Chinese University
of Hong Kong
High-resolution image reconstruction refers to the reconstruction of high-resolution images from multiple low-resolution, shifted, degraded samples of a true image. In this talk, we analyze this problem from the wavelet point of view. By expressing the true image as a function in L_{2}(R^{2}), we derive iterative algorithms which recover the function completely in the L_{2} sense from the given low-resolution functions. These algorithms decompose the function obtained from the previous iteration into different frequency components in the wavelet transform domain and add them into the new iterate to improve the approximation. We apply wavelet (packet) thresholding methods to denoise the function obtained in the previous step before adding it into the new iterate. Our numerical results show that the reconstructed images from our wavelet algorithms are better than those from the Tikhonov least-squares approach.
Algorithms for association rules
Markus Hegland, Mathematical Sciences Institute, Australian National University
Association rules are "if-then rules" with two measures which quantify the support and confidence of the rule for a given data set. Having their origin in market basket analysis, association rules are now one of the most popular tools in data mining. This popularity is in large part due to the availability of efficient algorithms following from the development of the Apriori algorithm.
In these lectures we will introduce basic concepts of association rule discovery including support, confidence, interestingness, the Apriori property, and hierarchical and quantitative association rules. The core of the lectures consists of a review of the most important algorithms for association rule discovery. Some familiarity with concepts like predicates, probability, expectation and random variables is assumed.
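The two measures named above are simple to state in code. A minimal Python sketch (the toy `baskets` data is invented for illustration; efficient discovery, as in Apriori, prunes the candidate search using the downward-closure property rather than rescanning like this):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)

# Toy market-basket data
baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

# Rule {bread} -> {milk}: appears in 2 of 4 baskets, and in 2 of the 3 bread baskets
print(support(baskets, {"bread", "milk"}))       # 0.5
print(confidence(baskets, {"bread"}, {"milk"}))  # 0.666...
```

The Apriori property referred to in the lectures is visible here: any superset of an itemset can only have smaller or equal support, so infrequent itemsets need never be extended.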
Domain decomposition methods for
image processing
Shiu-Hong Liu, Dept. of Mathematics, University of Manitoba, Canada
We investigate the application of domain decomposition
methods for nonlinear parabolic PDEs arising from a
combination of schemes to denoise digital images. We initially
apply the mean curvature flow equation to annihilate salt and
pepper noise. Then we apply a Perona-Malik type scheme to
remove additive Gaussian noise. At each time step, we employ
the Schwarz alternating method to solve a linear system of
equations. Some numerical examples illustrate the efficiency
of the method.
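For reference, one explicit time step of a Perona-Malik type diffusion looks as follows in Python (a minimal single-domain sketch with periodic boundaries and an illustrative diffusivity; the talk's contribution, the Schwarz domain decomposition wrapped around such solves, is not shown):

```python
import numpy as np

def perona_malik_step(u, dt=0.1, k=10.0):
    """One explicit step of u_t = div(g(|grad u|) grad u), g(s) = 1/(1+(s/k)^2)."""
    # one-sided differences to the four neighbours (periodic via np.roll, for brevity)
    dn = np.roll(u, 1, axis=0) - u
    ds = np.roll(u, -1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)   # small diffusivity across strong edges
    return u + dt * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds +
                     g(np.abs(de)) * de + g(np.abs(dw)) * dw)

noisy = np.random.default_rng(0).normal(100.0, 5.0, (64, 64))
smoothed = noisy.copy()
for _ in range(20):
    smoothed = perona_malik_step(smoothed)
print(noisy.std(), smoothed.std())  # the diffusion reduces the noise variance
```

In a Schwarz setting, each such (linearized) step would be solved on overlapping subdomains with the alternating method, exchanging boundary values between iterations.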
Wavelet adaptation for hybrid wavelet-large margin classifiers
Julia Neumann, Dept. of Mathematics & Computer Science,
University of Mannheim, Germany
Hybrid wavelet-large margin classifiers have recently
proven to solve difficult signal classification problems in
cases where merely using a large margin classifier like, e.g.,
the Support Vector Machine may fail. In our case, we
investigate the classification of electrophysiological
signals. The features for our hybrid classifier are selected
from the outputs of all orthonormal filter banks of fixed
length with respect to criteria measuring class separability
and generalisation error.
We evaluate a range of such adaptation criteria to perform feature selection for hybrid wavelet-large margin classifiers. The two main points we focus on are (i) approximation of the radius-margin error bound as the ultimate criterion for the target classifier, and (ii) the computational costs of the approximating criterion for feature selection relative to those for the classifier design.
We show that by virtue of the adaptivity of the filter bank, criteria which are more efficient than computing the radius-margin bound are sufficient for wavelet adaptation and, hence, feature selection.
Another important issue for the extraction of signal
features based on wavelet filtering is translation invariance.
In order to provide a library of decimated shift insensitive
perfect reconstruction wavelet transforms, we propose the use
of complex wavelets defined in the frequency domain. These
enable us to select shift insensitive wavelet features for the classification problem at hand.
Edge and ramp preserving noise
removal based on nonlinear diffusion with Gauss curvature
conductance
Suk-Ho Lee, Dept. of Mathematics, Yonsei University, Korea
In this paper we propose the Gauss curvature as the conductance of the diffusion equation for noise removal. The motivation for using the Gauss curvature is that it goes to zero if either one of the principal curvatures goes to zero. Thus it has the property of preserving curved edges, ramps, features with small gradient magnitude and other small-scale features, which other noise removal schemes, such as total variation based noise removal, mean curvature evolution and curvature-based evolution, fail to preserve. The properties and performance of the proposed scheme will be demonstrated theoretically and experimentally.
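The proposed conductance can be sketched with finite differences (a minimal Python illustration of the Gauss curvature of the image surface z = u(x, y), assuming unit grid spacing; not the paper's actual discretization). Note how it vanishes on a ramp, where one principal curvature is zero:

```python
import numpy as np

def gauss_curvature(u):
    """Gauss curvature of the image surface z = u(x, y) via central differences:
    K = (u_xx * u_yy - u_xy^2) / (1 + u_x^2 + u_y^2)^2."""
    uy, ux = np.gradient(u)        # np.gradient returns (d/drows, d/dcols)
    uxy, uxx = np.gradient(ux)
    uyy, _ = np.gradient(uy)
    return (uxx * uyy - uxy ** 2) / (1.0 + ux ** 2 + uy ** 2) ** 2

x = np.arange(-5.0, 6.0)
X, Y = np.meshgrid(x, x)
K = gauss_curvature(X ** 2 + Y ** 2)      # paraboloid: K = 4 at the apex
print(K[5, 5])                            # ~4
print(gauss_curvature(2 * X + 3 * Y)[5, 5])  # ~0 on a ramp: ramps are preserved
```

Used as the diffusivity, this conductance shuts diffusion off wherever either principal curvature vanishes, which is exactly the ramp- and edge-preserving behaviour described above.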
Introduction to digital watermarking
Ee-Chien Chang, Dept. of Computer Science, National
University of Singapore
Mohan Kankanhalli, Dept. of Computer Science, National
University of Singapore
This tutorial provides the background information necessary
to understand the applications and problems of watermarking
and data hiding.
Content-dependent filter (CDF) design and analysis: CDF design and analysis based on the anisotropic diffusion model
Charles Chui, Stanford University and University of Missouri-St. Louis, USA
Several limitations of numerical solution of the
anisotropic diffusion (AD) PDE as well as the bilateral filter
will be discussed. To address these limitations, the notion of "content-dependent filtering" (CDF) is introduced in the first
presentation. Both design criteria and theoretical analysis of
these innovative filters will be addressed. The second
presentation will focus on technical details. It will be clear
that with appropriate modifications, CDFs also apply to image
enhancement and image segmentation.
Content-dependent filter (CDF) design and analysis: Design of CDFs for image noise reduction
Jianzhong Wang, Mathematics, Computer Science and
Statistics, Sam Houston State University
Several limitations of numerical solution of the
anisotropic diffusion (AD) PDE as well as the bilateral filter
will be discussed. To address these limitations, the notion of "content-dependent filtering" (CDF) is introduced in the first
presentation. Both design criteria and theoretical analysis of
these innovative filters will be addressed. The second
presentation will focus on technical details. It will be clear
that with appropriate modifications, CDFs also apply to image
enhancement and image segmentation.
Fast fourth-order time-stepping for stiff PDE
Nick Trefethen, Oxford University
Many partial differential equations combine higher-order linear terms with lower-order nonlinear terms. Examples include the KdV, Allen-Cahn, Burgers, and Kuramoto-Sivashinsky equations. High accuracy is typically needed because the solutions may be highly sensitive to small perturbations. For simulations in simple geometries, spectral discretization in space is excellent, but what about the time discretization? Too often, second-order methods are used because higher order seems impractical. In fact, fourth-order methods are entirely practical for such problems, and we present a comparison of the competing methods of linearly implicit schemes, split-step schemes, integrating factors, "sliders", and ETD or exponential time differencing. In joint work with A.-K. Kassam we have found that a fourth-order Runge-Kutta scheme known as ETDRK4, developed by Cox and Matthews, performs impressively if its coefficients are stably computed by means of contour integrals in the complex plane. Online examples show that accurate solutions of challenging nonlinear PDE can be computed by a 30-line Matlab code in less than a second of laptop time. For reaction-diffusion equations in 2D or 3D, we can often beat standard finite-difference methods by a factor of 10 or more in speed.
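The contour-integral stabilization can be illustrated on the simplest ETD coefficient, phi1(z) = (e^z - 1)/z, which suffers catastrophic cancellation for small |z|. A scalar Python sketch of the trick (Kassam and Trefethen apply it to the matrix-valued coefficients of ETDRK4; the radius and point count here are illustrative):

```python
import numpy as np

def phi1_naive(z):
    """(e^z - 1)/z computed directly: cancellation ruins accuracy for small |z|."""
    return (np.exp(z) - 1.0) / z

def phi1_contour(z, r=1.0, m=32):
    """Evaluate (e^z - 1)/z stably via the Cauchy integral formula:
    the mean of phi1 over m equally spaced points on a circle of radius r
    centred at z converges geometrically to phi1(z), with no cancellation."""
    s = z + r * np.exp(2j * np.pi * np.arange(1, m + 1) / m)
    return np.mean((np.exp(s) - 1.0) / s).real

z = 1e-9
print(phi1_naive(z))    # loses the last ~7 digits to cancellation
print(phi1_contour(z))  # accurate to machine precision (phi1(z) = 1 + z/2 + ...)
```

The same averaging, with the circle enclosing the eigenvalues of the linear operator, gives stable matrix coefficients for ETDRK4 even when some eigenvalues are near zero.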
Sparse grids and the analysis of
high dimensional large scale data
Markus Hegland, Mathematical Sciences Institute, Australian
National University
The search for interesting variable stars, the discovery of
relations between geomorphological properties, satellite
observations and mineral concentrations, and the analysis of
biological networks all require the solution of a large number
of complex learning problems with large amounts of data. A
major computational challenge faced in these investigations is
posed by the curse of dimensionality. A well known aspect of
this curse is the exponential dependence of the size of
regular grids on the dimension of the domain. This makes
traditional finite element approaches infeasible for
high-dimensional domains. It is less known that this curse also affects computations of radial basis function approximations, albeit in a slightly more subtle way.
Sparse grid functions can deal with the major problems of
the curse of dimensionality. As they are the superposition of
traditional finite element spaces, many wellknown algorithms
can be generalised to the sparse grid context. Sparse grids
have been successfully used to solve partial differential
equations in the past and, more recently, have been shown to
be competitive for learning problems as well. The talk will
provide a general introduction to the major properties of
sparse grids and will discuss connections with kernel based
methods and parallel learning algorithms. It will conclude
with a short review over some recent work on algorithms based
on the combination technique.
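The grid-size blow-up behind the curse of dimensionality is easy to make concrete (a back-of-the-envelope sketch; the sparse-grid count quoted in the comment is the standard O(2^n n^{d-1}) asymptotic, whose constant depends on the construction):

```python
# Nodes in a regular tensor-product grid with 2^n + 1 points per axis.
# A sparse grid of comparable resolution needs only O(2^n * n^(d-1)) nodes.
def full_grid_size(n, d):
    return (2 ** n + 1) ** d

for d in (1, 2, 5, 10):
    print(d, full_grid_size(4, d))
# d = 10 already gives 17**10 (about 2e12) nodes: infeasible for
# traditional finite elements, as noted above.
```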
References:
- Additive Sparse Grid Fitting, M. Hegland, Curve and Surface Fitting: Saint-Malo 2002, pp. 209-219, Nashboro Press, Brentwood, 2003.
- Adaptive sparse grids, M. Hegland, Proceedings of the 10th Computational Techniques and Applications Conference CTAC-2001, ANZIAM Journal, vol. 44, pp. C335-C353, available online.
(For other references see http://datamining.anu.edu.au)
Relations between nonlinear
denoising methods in signal and image processing
Gabriele Steidl, Fakultät für Mathematik und Informatik, Universität Mannheim
Wavelet shrinkage, nonlinear diffusion filtering, and
non-quadratic regularization methods are three useful
techniques for edgepreserving denoising of signals and
images.
In the first part of the talk we analyze relations between
the important 1D prototypes soft Haar wavelet shrinkage,
spacediscrete total variation (TV) diffusion and discrete
total variation (TV) regularization: we prove that
spacediscrete TV diffusion and TV regularization are
identical. We show that applying translationally invariant
soft Haar wavelet shrinkage on a single scale can be regarded
as an absolutely stable explicit discretisation of TV
diffusion. Afterwards we show that wavelet shrinkage on
multiple scales can be regarded as a single step diffusion
filtering of the Laplacian pyramid of the signal.
In the second part of the talk we study general relations
between arbitrary shrinkage functions and diffusivities: we
show that one step of a (stabilized) explicit discretisation
of nonlinear diffusion can be expressed in terms of wavelet
shrinkage on a single spatial level. This equivalence allows a
fruitful exchange of ideas between the two fields. For example, we derive new wavelet shrinkage functions from existing diffusivity functions, and identify some previously used shrinkage functions as corresponding to well known diffusivities. Moreover, by transferring stability notions from diffusion filtering to wavelet shrinkage, we derive conditions on the shrinkage function that ensure that shift-invariant single-level Haar wavelet shrinkage is maximum-minimum stable, monotonicity preserving, and variation diminishing.
Finally, we present some novel diffusion-inspired wavelet shrinkage methods with improved rotation invariance in 2D.
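A single-level soft Haar shrinkage step of the kind analyzed above can be sketched as follows (a minimal Python illustration; the threshold and signal are invented, and the exact correspondence to TV diffusion proved in the talk is not reproduced here):

```python
import numpy as np

def soft_shrink(x, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink_single_level(f, t):
    """One level of orthonormal Haar analysis, soft shrinkage of the detail
    coefficients, and synthesis. Signal length must be even."""
    s = (f[0::2] + f[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (f[0::2] - f[1::2]) / np.sqrt(2.0)   # detail coefficients
    d = soft_shrink(d, t)                    # denoise: attenuate the details
    out = np.empty_like(f)
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out

f = np.array([4.0, 4.1, 4.0, 8.0, 8.2, 8.0])
print(haar_shrink_single_level(f, 0.05))
```

With t = 0 the step is a perfect-reconstruction identity; for small t the tiny intra-pair fluctuations are attenuated while the large 4-to-8 edge coefficient is barely changed, the edge-preserving behaviour shared with TV diffusion.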
Underwater acoustic signal
classification
Zhenyong Lin, Temasek Laboratories
We apply a fast adaptive short-time Fourier transform algorithm for the representation of underwater whale song signals, and extract first- and second-order statistics from the time-frequency representation as features for underwater humpback whale call classification. After optimising the parameters for classification, we achieve excellent results.
Bayesian resolution enhancement of
compressed video
Aggelos Katsaggelos, Electrical and Computer Engineering,
Northwestern University, USA
Super-resolution algorithms recover high-frequency information from a sequence of low-resolution observations. In this presentation we consider some recent results on the impact of video compression on the super-resolution task. Hybrid motion-compensation and transform coding schemes are the focus, as these methods provide observations of the underlying displacement values as well as a variable noise process. We utilize the Bayesian framework to incorporate this information and fuse the super-resolution and post-processing problems. A tractable solution is defined, and the relationship between algorithm parameters and information in the compressed bitstream is established. The association between resolution recovery and compression ratio is also explored. Simulations illustrate the performance of the procedure with both synthetic and non-synthetic sequences.
Challenges and new solutions in predicting tsunami waves
M. M. Lavrentiev, Jr., Sobolev Institute of Mathematics SB RAS, Novosibirsk, Russia
In this paper the problem of tsunamigenic earthquakes is considered. A reliable forecast of tsunami danger will perhaps only become available on the basis of a multi-method approach. Among the possible methods we list the determination of the initial displacement at the source by solving the corresponding inverse problem (A. Avdeev et al., 2001; F. Gonzalez et al., 2003), and neural network analysis of the time series.
The Pacific Marine Environmental Laboratory (PMEL) has developed the Deep-ocean Assessment and Reporting of Tsunamis (DART) system and has deployed it at six locations around the Pacific. DART systems obtain high-quality data of tsunami amplitudes in the open ocean. These data can be used to reconstruct the true tsunami source parameters. To achieve that goal, PMEL has created a database of precalculated time series of tsunami waves from (unit) sources at subduction zones around the Pacific. The solutions from the database can be combined, using a Web interface developed for this purpose (called FACTS, the Facility for the Analysis and Comparison of Tsunami Simulation), to reconstruct tsunami sources recorded by DART. This approach will be used in the next generation of the real-time tsunami forecasting system, which is under development by the National Tsunami Hazard Mitigation Program (NTHMP).
In order to reconstruct tsunami sources in an automatic mode, a special algorithm and a software application have been developed. This module is designed to determine the tsunami source parameters by processing the marigrams obtained at the DART stations. The module effectively determines the amplification coefficients for unit sources, which makes it possible to approximate the shape of the vertical displacement of the sea surface at the tsunami source area. The algorithm is based on minimization of the difference between the measured marigram(s) and a linear combination of synthetic marigrams taken from the FACTS database.
This inversion algorithm has been numerically tested with the historical data for the 1996 Andreanov tsunami. The module has been implemented into the FACTS database.
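The inversion described above is, at its core, a linear least-squares fit of unit-source amplification coefficients. A minimal Python sketch with synthetic data (the sinusoidal "marigrams" are invented stand-ins for FACTS database entries):

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns of G: synthetic marigrams from three hypothetical unit sources
# (time series of length 200, stand-ins for the FACTS database entries).
t = np.linspace(0.0, 20.0, 200)
G = np.column_stack([np.sin(0.7 * t),
                     np.sin(1.1 * t + 0.3),
                     np.exp(-0.1 * t) * np.sin(2.0 * t)])

true_coeffs = np.array([1.5, -0.4, 0.8])
measured = G @ true_coeffs + 0.01 * rng.standard_normal(t.size)  # noisy "DART" record

# Amplification coefficients minimizing ||G c - measured||_2
coeffs, *_ = np.linalg.lstsq(G, measured, rcond=None)
print(coeffs)  # close to [1.5, -0.4, 0.8]
```

The fitted coefficients scale the unit-source displacements, which is what lets the module approximate the shape of the sea-surface displacement at the source area.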
In order to improve the quality of the source parameter determination, alternative methods have been studied. First, an inverse problems approach has been realized. After 1D numerical tests, a cost-effective 2D algorithm has been developed and preliminarily tested against synthetic and historical data. Finally, a neural network based algorithm has been designed.
References
- Avdeev A. et al. Complex analysis of ocean tsunami observation data for solution of the inverse problem // Proc. of the Intern. Tsunami Symp., Seattle, USA, 2001, pp. 795-808.
- Gonzalez F.I., Titov V.V., Avdeev A.V., Bezhaev A.Yu., Lavrentiev M.M. (Jr), Marchuk An.G. Real-Time Tsunami Forecasting: Challenges and Solutions. Proc. International Conference Mathematical Methods in Geophysics-2003, Novosibirsk, ICM&MG Publisher, 2003, pp. 225-228.
High order numerical methods for
time dependent HamiltonJacobi equations
ChiWang Shu, Division of Applied Mathematics, Brown
University, USA
In these four lectures we will review high order accurate numerical methods for solving time dependent Hamilton-Jacobi equations. We will start with a brief introduction to the Hamilton-Jacobi equations, the appearance of singularities as discontinuities in the derivatives of their solutions, hence the necessity of introducing the concept of viscosity solutions, and first order monotone type numerical schemes on structured and unstructured meshes to approximate such viscosity solutions, which can be proven convergent with error estimates. We then move on to discuss high order accurate methods which are based on the first order monotone schemes as building blocks. We will describe the Essentially Non-Oscillatory (ENO) and Weighted Essentially Non-Oscillatory (WENO) schemes for structured meshes, and the discontinuous Galerkin (DG) and WENO schemes for unstructured meshes.
Lecture One: Introduction to Hamilton-Jacobi Equations, Viscosity Solutions and First Order Monotone Schemes on Structured Meshes
Lecture Two: First Order Monotone Schemes on Unstructured Meshes. High Order ENO Schemes on Structured Meshes. High Order TVD Time Discretization
Lecture Three: High Order WENO Schemes on Structured Meshes. High Order WENO Schemes on Unstructured Meshes
Lecture Four: Introduction to the Discontinuous Galerkin Method. High Order Discontinuous Galerkin Method on Unstructured Meshes
High order discontinuous Galerkin finite element method for solving time dependent Hamilton-Jacobi equations
ChiWang Shu, Division of Applied Mathematics, Brown
University, USA
In this talk we first describe joint work with Changqing Hu on designing a high order Runge-Kutta discontinuous Galerkin method for solving time dependent Hamilton-Jacobi (HJ) equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulations, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. The finite element solution approximates the solution of the HJ equation by piecewise polynomials; however, the evolution of these piecewise polynomials follows the discontinuous Galerkin framework of the conservation law systems that the gradient of the solution to the HJ equation satisfies. In the original method of Hu and Shu, a least-squares procedure is performed to obtain a single polynomial approximating the solution of the HJ equation after two polynomials approximating its gradient are obtained. Recently, in joint work with Fengyan Li, we utilize a smaller locally divergence-free discontinuous Galerkin space to bypass the need for this least-squares procedure, which reduces both the computational cost and the storage requirement. One and two dimensional numerical examples are given to illustrate the capability of the method.
Unitary extension principle and
applications
Zuowei Shen, Dept. of Mathematics, National University of
Singapore
The unitary extension principle makes the construction of tight frame wavelets painless. This gives great flexibility to design tight frame wavelet filters according to the problem at hand. In this talk, I will briefly review the unitary extension principle and illustrate the power of this principle by showing how it is used to design filters for various high-resolution image reconstruction problems.
Impulse noise removal by median-type noise detector and edge-preserving regularization
Raymond Chan, Dept. of Mathematics, Chinese University
of Hong Kong
This paper proposes a two-phase scheme for impulse noise removal. In the first phase, an adaptive median filter is used to identify pixels which are likely to be contaminated by noise (noise candidates). In the second phase, the image is restored using a specialized regularization method that applies only to those selected noise candidates. In terms of edge preservation and noise suppression, our restored images show a significant improvement compared to those restored by using nonlinear filters or regularization methods alone. Joint work with Chung-Wa Ho, Chen Hu, and Mila Nikolova.
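The first phase can be sketched with a simple, non-adaptive median-based detector (an illustrative stand-in for the adaptive median filter; the regularization phase is not shown, and the threshold is invented):

```python
import numpy as np

def detect_impulses(img, threshold=30):
    """Flag pixels that differ strongly from their 3x3 median
    (a crude stand-in for the adaptive median filter)."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    med = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            med[i, j] = np.median(padded[i:i + 3, j:j + 3])
    # cast to int to avoid uint8 wrap-around in the difference
    return np.abs(img.astype(int) - med.astype(int)) > threshold

clean = np.full((8, 8), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[2, 3] = 255   # salt
noisy[5, 6] = 0     # pepper
mask = detect_impulses(noisy)
print(np.argwhere(mask))  # -> [[2 3] [5 6]]
```

Restricting the second-phase regularization to `mask` is what lets the scheme denoise candidates aggressively while leaving uncorrupted pixels, and hence edges, untouched.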
Tutorial on wavelets and PDE-based image processing
Tony Chan, Dept. of Mathematics, University of California,
Los Angeles and Jianhong (Jackie) Shen, School of Mathematics,
University of Minnesota
Lecture One: A Friendly Tour of Mathematical Image & Vision
Analysis (by Jianhong Shen)
This tour explains the vast background, important
applications, essential ingredients, and profound connections
of Mathematical Image & Vision Analysis in contemporary
sciences and technologies. Highlighted are mathematical views
of machine and human vision, and various modern mathematical
tools in image analysis and processing, including
regularization and inverse problems, variational optimization,
and geometric and nonlinear PDEs. More details are to come in
Tony Chan's tutorials on Thursday.
Lecture Two: Wavelets: Motivation, Construction, and
Application (by Jianhong Shen)
Complex information and patterns are built upon simple
basic elements, just as the universe is upon basic particles
(e.g., quarks and atoms). Wavelets are such a class of
building blocks. Across time, space, and scales, like systems
of well organized sensors, wavelets efficiently capture and
encode multiscale temporal and spatial information and
patterns. This tutorial introduces the motivation,
construction, implementation, as well as main applications of
wavelets in image and signal analysis and processing.
Lecture Three: Total variation based image restoration and
inpainting (by Tony Chan)
Total variation based methods have become one of the most popular and powerful PDE-based image processing methodologies. Their attractiveness stems from their ability to locate and preserve discontinuities ("edges"). There is also a
fundamental connection to a geometric view of images. In this
tutorial, we will give an introduction to this methodology,
from basic mathematical properties to efficient computational
algorithms. We will also give some applications to image
denoising and image inpainting.
Lecture Four: Variational image segmentation using level
sets (by Tony Chan)
Image segmentation is a fundamental task in image
processing and there have been numerous algorithms proposed
for it. Over the last 15 years or so, a powerful class of methods has been proposed, based on a variational formulation in which the desired segmentation is defined as a local minimizer of an energy functional constructed to measure the balance between fidelity to the image and a
the MumfordShah model. In the implementation of these models,
a representation of the segments or their boundaries must be
chosen so as to allow a robust and efficient algorithm. The
level set methodology has proven to be a very powerful and
versatile tool for this task. Its attractiveness stems from
its ability to allow automatic merging and pinchingoff of
segments during the computation, as well as the ability to
capture the segment boundaries on a uniform grid instead of
tracking them explicitly. In this tutorial, we will give an
introduction to this class of methods.
Periodic wavelet frames for
processing bandlimited signals
Say Song Goh, Dept. of Mathematics, National University of
Singapore
Bandlimited signals are of interest in various
applications. In this talk, we shall discuss the use
periodic wavelet frames in processing such signals. Results on
frame multiresolution analyses and wavelet frames of periodic
functions, which provide the theoretical foundation of the
approach, will be presented. We shall also see how the
approach is realized through its corresponding frequencybased
decomposition and reconstruction algorithms. As an
illustration, experimental results on using trigonometric
polynomial frames to separate shipping noise from snapping
shrimp noise and pink noise will be shown.
Computational challenges in
electron microscopy of macromolecules
Esmond G. Ng, Lawrence Berkeley National Laboratory
Three-dimensional (3D) density maps of biological molecules can be determined by merging a large number of two-dimensional (2D) images taken by a low-dose electron microscope. The 2D images are projections of a collection of randomly oriented 3D molecules embedded in a thin layer of vitreous ice.
In order to recover the 3D structure, one must first
determine the relative orientations of projection images. Once
the orientation parameters are obtained, the 3D
reconstruction can be carried out in the spatial domain by solving a large-scale linear least squares (LLS) problem iteratively. Because the angular coverage by 2D projection images is limited, and because the projections are typically noisy, the LLS problem is ill-posed.
The noisy nature of the 2D projections makes it difficult
to determine orientation parameters accurately in advance. As
a result, an iterative procedure that simultaneously refines
the orientation parameters and the 3D structure of the
molecule is often used to render the final structure of the macromolecule.
In this talk, we will give an overview of the mathematical formulation of the image reconstruction problem, and discuss numerical techniques that can be employed to solve it. In particular, we will point out the important role numerical linear algebra plays in producing high-resolution 3D images of biological macromolecules.
Weberized Mumford-Shah model with Bose-Einstein photon noise: Light-adapted segmentation inspired by vision psychology, retinal physiology, and quantum statistics
Jianhong (Jackie) Shen, School of Mathematics, University
of Minnesota
Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. There has been a great deal of effort in mathematical biology to model and interpret the law at the cellular level as well as in terms of linear (with feedbacks) and nonlinear biological systems (for the two retinas). In terms of image and vision analysis, the current author first emphasized the significance of the law in faithfully modelling both human and computer vision, and attempted to integrate it into important visual processors such as image denoising (Physica D, 175, pp. 241-251, 2003).
The current work develops and studies a new segmentation model based on the integration of both Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Applied Math., 42, pp. 577-685, 1989). Explained in detail are issues like why the classical Mumford-Shah model lacks light adaptivity, and why its "weberized" version proposed here more faithfully reflects human vision's superior capability in segmenting retinal images in a variety of lighting environments from dawn to dusk. It is also claimed that the popular Gaussian noise model is inappropriate for the weberized Mumford-Shah prior, and instead, the intrinsic thermal noise of photon ensembles is introduced based on Bose-Einstein's distributions in Quantum Statistics, which turns out to be compatible with the weberization idea both analytically and computationally.
The current work also focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's Γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital., 6-B, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs. Comparison with the classical Mumford-Shah model is made through typical numerical examples.
This is joint work freshly completed with my Ph.D. student Joon-Mo Jung.
Fast summation at nonequispaced
knots by NFFTs
Daniel Potts, Institut für Mathematik, Medizinische Universität Lübeck, Germany
A fast sweeping method with
application in image processing
Hongkai Zhao, Dept. of Mathematics, University of
California
In this talk I will present a fast sweeping method for computing the numerical solutions of a class of Hamilton-Jacobi equations. The method is an iterative method which uses upwind differences for discretization and uses Gauss-Seidel iterations with alternating sweeping orderings to solve the discretized system. The method is extremely simple to implement in any number of dimensions. Convergence and error estimates of the algorithm will be discussed. Applications to finding distances on manifolds and image reconstruction will be presented.
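A compact Python sketch of the idea for the eikonal equation |grad u| = 1 with a point source (an illustrative special case with unit speed on a small grid: a Godunov upwind update swept by Gauss-Seidel in the four alternating orderings described above):

```python
import numpy as np

def fast_sweep_eikonal(mask, h=1.0, cycles=4):
    """Solve |grad u| = 1 with u = 0 on the source set (mask True) by
    Gauss-Seidel sweeps in alternating orderings."""
    n, m = mask.shape
    big = 1e10
    u = np.where(mask, 0.0, big)
    for _ in range(cycles):
        for di in (1, -1):                       # four sweep orderings
            for dj in (1, -1):
                for i in range(n)[::di]:
                    for j in range(m)[::dj]:
                        if mask[i, j]:
                            continue
                        a = min(u[i - 1, j] if i > 0 else big,
                                u[i + 1, j] if i < n - 1 else big)
                        b = min(u[i, j - 1] if j > 0 else big,
                                u[i, j + 1] if j < m - 1 else big)
                        # Godunov upwind solution of the local quadratic
                        if abs(a - b) >= h:
                            new = min(a, b) + h
                        else:
                            new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                        u[i, j] = min(u[i, j], new)
    return u

src = np.zeros((9, 9), dtype=bool)
src[4, 4] = True
u = fast_sweep_eikonal(src)
print(u[4, 8])  # 4.0: the grid distance along the axis from the centre
```

Each of the four orderings propagates information along one quadrant of characteristics, which is why a handful of sweeps suffices regardless of grid size.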
Using geometry and iterated
refinement for inverse problems: Total variation image
restoration
Stanley Osher, University of California, Los Angeles, USA
Total Variation based regularization and image restoration
was developed by Rudin-Osher-Fatemi in the late 80's.
Recently, Yves Meyer characterized textures as elements of the
dual of BV and did some extremely interesting analysis on the
original ROF model. This led to practical algorithms to
decompose images into structure plus texture. Very promising
results involving processing image gradients simultaneously
with images were obtained by Lysaker-Osher-Tai, based on
earlier work on processing surfaces by
Tasdizen-Whitaker-Burchard-Osher. This has now led to a new
way of refining and enhancing the solutions to a wide class of
inverse problems. I will discuss all this and present image
restoration results which appear to be state-of-the-art.
Joint work with Jinjun Xu, Wotao Yin and Donald Goldfarb.
A fast and adaptive surface
reconstruction algorithm based on implicit B-spline surfaces
Falai Chen, Dept. of Mathematics, University of Science and
Technology of China
Based on implicit B-spline surfaces, we present a fast and
adaptive algorithm to reconstruct surfaces from point clouds.
Our algorithm is driven by a surface fitting model, which
amounts to solving a quadratic optimization problem. We
explore the matrix form of the surface fitting model and put
forward a fast, low-memory algorithm to solve the
optimization problem. In addition, for non-uniformly sampled
point sets, methods to adaptively choose the knot vectors of
the implicit B-spline surface are presented. Some examples are
provided to illustrate the effectiveness of the algorithm.
Improved splitting-integrating methods for image
geometric transformations
Zi-Cai Li, Dept. of Applied Mathematics, National Sun
Yat-sen University
Logic operators for active contours
on multichannel images
Tony Chan, Dept. of Mathematics, University of California,
Los Angeles
We present a mathematical framework for active contour
object detection in multichannel images. Our framework allows
the user to define arbitrary logic operations to combine
object information from the different channels. Specific
models are derived based on the single-channel region-based
"Active contours without edges" model. Numerical examples will
be presented to show that the models can robustly detect
objects defined by general logical combinations in both
synthetic and realistic images.
A hierarchical Hough transform for
fingerprint matching
Tao Xia, CWAIP, National University of Singapore
This talk addresses improvements to the generalized Hough
transform for biometric identification applications.
The errors in the generalized Hough transform for fingerprint
matching are investigated, and a new hierarchical Hough
transform algorithm is proposed which is faster and more
accurate than the conventional generalized Hough transform.
LBIC imaging of semiconductor
arrays
Weifu Fang, Dept. of Mathematics, West Virginia University,
USA
Laser beam induced current (LBIC) is a nondestructive
technique that has been used for a number of years to
qualitatively examine large arrays of p-n junctions,
especially in HgCdTe infrared focal plane arrays. In this
talk, we will present PDE models for the LBIC imaging
technique and employ a homogenization method to derive
approximations of the LBIC images of large arrays. Such
approximations reduce the computational burden in simulations
of these LBIC images. We will then illustrate the application
of our approximations for the purpose of recovering array
parameters from the LBIC images.
Improving learning machines with
optimization strategies
Juan Manuel Górriz Sáez, Escuela Politécnica Superior de
Algeciras, Spain
In this work we establish new bounds on the actual risk
functional of a new online parametric machine for time series
forecasting, based on Vapnik-Chervonenkis (VC) theory. Using
the strong connection between support vector machines (SVM)
and regularization theory (RT), we propose a
regularization operator in order to obtain a suitable
expansion of radial basis functions (RBFs), with the
corresponding expressions for updating the neural parameters.
This operator seeks the "flattest" function in a feature
space, minimizing the risk functional and controlling the
capacity of the learning machine. The performance of the
learning machine can be improved using preprocessing
techniques such as Principal Component Analysis or Independent
Component Analysis. Finally, we mention some modifications and
extensions that can be applied to control neural resources
(complexity control) and to select a relevant input space
(suitable expansion), avoiding high computational effort,
i.e., solving a convex optimization problem.
New idea of segmentation method by
using Riemannian geometry
Shen Chunli, Dept. of Mathematics, East China Normal
University, China
In image processing there are many segmentation methods
that can extract a useful contour from an image, such as the
well-known geometric active contour method, the geodesic
active contour method, and various isotropic and anisotropic
diffusion methods. We propose a new viewpoint from which the
above methods can be deduced in a unified way from Riemannian
geometry; the diffusion coefficients or diffusion tensors in
these methods can be obtained intrinsically from the geometric
properties of the Riemannian metric.
Unitary matrix functions,
wavelet algorithms, and structural properties of wavelets
Palle Jorgensen, University of Iowa, USA
Some connections between operator theory and wavelet
analysis will be presented in three lectures. Since the
mid-eighties, it has become clear that key tools in wavelet
analysis rely crucially on operator theory. While isolated
wavelet variations and constructions had been known since
Haar in 1910, it was the advent of multiresolutions and
subband filtering techniques that provided the tools for our
ability to now easily create efficient algorithms, ready for a
rich variety of applications to practical tasks. Part of the
underpinning for this development in wavelet analysis is
operator theory. This will be presented in the lectures, and
we will also point to a number of developments in operator
theory which in turn derive from wavelet problems, but which
are of independent interest in mathematics. Some of the
material will build on chapters in a new wavelet book,
coauthored by the speaker and Ola Bratteli, see
http://www.math.uiowa.edu/~jorgen/.
 Part I. Scaling identities in wavelet analysis, and systems
of operators in Hilbert space
Aspects of wavelet theory may be formulated in terms of
systems of operators: the scaling operator, the translations,
and the transition operator. We outline this approach using Hilbert
space geometry, and we give a number of examples. We show how
the pyramid algorithms may be given a geometric formulation in
this framework, and we recall the fundamentals of wavelet
algorithms and of wavelet packet constructions.
 Part II. Wavelet algorithms computed with unitary matrix
functions (also called polyphase matrices)
Frequency subband filters may be assembled into a single
function, but it is then operator valued. If the number of
masking coefficients is finite, it takes values in the unitary
matrices, where the size of the matrix depends on the number
of masking coefficients. We show how factorization theory for
these matrix functions may be used in the analysis and the
classification of wavelets. This is joint work with Ola
Bratteli.
 Part III. Some special wavelets with general multiplicity
configurations
We recall some fundamentals from the theory of translation
invariant subspaces, as well as the notion of multiply
generated resolution subspaces. We outline results from two
recent joint papers with L. Baggett, K. Merrill, and J. Packer.
Multiplicity theory is needed for the study of
frequencylocalized wavelets, including wavelet sets. Our
formulation allows new wavelet construction; for example, we
sketch wavelet constructions in Hilbert spaces other than
L^{2}(R^{d}). A number of examples are given, including wavelet
constructions for fractals; the fractal theory is joint work
with Dorin Dutkay.
References:
[1] math.CA/0407330, "Martingales, endomorphisms, and
covariant systems of operators in Hilbert space", Dorin Ervin
Dutkay and Palle E. T. Jorgensen.
[2] math.CA/0403117, "Unitary matrix functions, wavelet
algorithms, and structural properties of wavelets" (lecture
notes for the Singapore workshop), Palle E. T. Jorgensen.
[3] math.CA/0305443, "Wavelets on Fractals", Dorin E. Dutkay
and Palle E. T. Jorgensen.
[4] math.CA/0405301, "Construction of Parseval wavelets from
redundant filter systems", L. W. Baggett, P. E. T. Jorgensen,
K. D. Merrill, and J. A. Packer.
[5] math.OA/0308131, "An analogue of Bratteli-Jorgensen loop
group actions for GMRA's", L. W. Baggett, P. E. T. Jorgensen,
K. D. Merrill, and J. A. Packer.
[6] Ola Bratteli and Palle E. T. Jorgensen, "Wavelets through
a looking glass", Applied and Numerical Harmonic Analysis,
Birkhäuser, Boston, 2002.
Greedy algorithms in stable
recovery of sparse overcomplete representations
Vladimir Temlyakov, Department of Mathematics, University
of South Carolina
Overcomplete representations are attracting interest in
signal processing theory, particularly due to their potential
to generate sparse representations of signals. However, in
general, the problem of finding sparse representations must be
unstable in the presence of noise. We prove the possibility of
stable recovery under a combination of sufficient sparsity and
favorable structure of the overcomplete system.
We consider the situation where an ideal underlying signal
indeed has a sparse representation but we can observe only a
noisy version. We assume that the overcomplete system has the
property of mutual incoherence, and that the ideal
underlying signal has a sufficiently sparse representation,
according to a sparsity threshold defined using the mutual
coherence of the overcomplete system. Under these conditions,
we show that the optimally sparse approximation to the noisy
data, to within the noise level, differs from the
optimally sparse decomposition of the ideal noiseless signal
by at most a constant multiple of the noise level. It turns
out that the Orthogonal Greedy Algorithm is a good method for
approximate recovery of signals that have sparse
representation.
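As a concrete sketch, the Orthogonal Greedy Algorithm (known
in the signal processing literature as Orthogonal Matching
Pursuit) can be written in a few lines of NumPy; the
dictionary and step count in any usage are illustrative
assumptions, not data from the talk:

```python
import numpy as np

def orthogonal_greedy(D, y, n_steps):
    """Orthogonal Greedy Algorithm: at each step pick the
    dictionary column (atom) most correlated with the current
    residual, then refit all selected coefficients by least
    squares. Assumes n_steps >= 1."""
    support = []
    x = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_steps):
        # greedy atom selection
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # orthogonal projection onto the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

For an orthonormal dictionary the algorithm recovers a sparse
representation exactly in as many steps as there are nonzero
coefficients; for incoherent overcomplete systems the
abstract's stability result bounds the error by a constant
multiple of the noise level.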
Highly sparse expansions from
redundant dictionaries are unique and independent of the
sparseness measure
Remi Gribonval, IRISAINRIA, France
A series of recent results shows that if a signal admits a
sufficiently sparse representation (in terms of the number of
nonzero coefficients) in an “incoherent” dictionary, this
solution is unique and can be recovered as the unique solution
of a linear programming problem. We generalize these results
to a large class of sparsity measures which includes, but is
not restricted to, the ℓ^{p} sparsity
measures for 0 ≤ p ≤ 1. We give sufficient conditions
on a signal such that the simple solution of a linear
programming problem simultaneously solves all the nonconvex
(and generally hard combinatorial) problems of sparsest
representation w.r.t. arbitrary admissible sparsity measures.
Our results should have a practical impact on source
separation methods based on sparse decompositions, since they
indicate that a large class of sparse priors can be
efficiently replaced with a Laplacian prior without changing
the resulting solution.
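The linear-programming route mentioned above can be sketched
as follows: minimizing the ℓ¹ norm subject to Dx = y becomes a
standard LP after splitting x into nonnegative parts,
x = u − v (a textbook reformulation; the toy dictionary in the
usage note is an illustrative assumption):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Minimize ||x||_1 subject to D x = y, posed as a linear
    program: write x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u) + sum(v)."""
    m, n = D.shape
    c = np.ones(2 * n)            # objective: sum(u) + sum(v)
    A_eq = np.hstack([D, -D])     # constraint: D u - D v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    uv = res.x
    return uv[:n] - uv[n:]
```

For instance, with a dictionary consisting of the standard
basis of R⁴ plus the extra unit atom (e₁+e₂)/√2, the signal
y = 2e₁+2e₂ has a 2-sparse representation of ℓ¹ norm 4 and a
1-sparse one of norm 2√2; the LP returns the sparser one.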
Subdivision scheme tuning around
extraordinary vertices
Leif Kobbelt, RWTH Aachen University
We extend the standard method to derive and optimize
subdivision rules in the vicinity of extraordinary vertices (EV).
Starting from a given set of rules for regular control meshes,
we tune the extraordinary rules (ER) such that the necessary
conditions for C^{1} continuity are satisfied along
with as many necessary C^{2} conditions as possible.
As usually done, our approach sets up the general
configuration around an EV by exploiting rotational symmetry
and reformulating the subdivision rules in terms of the
subdivision matrix's eigencomponents. The degrees of freedom
are then successively eliminated by imposing new constraints
which allows us, e.g., to improve the curvature behavior
around EVs. The method is flexible enough to simultaneously
optimize several subdivision rules, i.e. not only the one for
the EV itself but also the rules for its direct neighbors.
Moreover it allows us to prescribe the stencils for the ERs
and naturally blends them with the regular rules that are
applied away from the EV. All the constraints are combined in
an optimization scheme that searches in the space of feasible
subdivision schemes for a candidate which satisfies some
necessary conditions exactly and other conditions
approximately. The relative weighting of the constraints
allows us to tune the properties of the subdivision scheme
according to application specific requirements. We demonstrate
our method by tuning the ERs for the wellknown Loop scheme
and by deriving ERs for a √3-type
scheme based on a 6-direction box-spline.
The Bramble-Hilbert Lemma and
Whitney estimates for convex domains: applications to
multivariate piecewise polynomial approximation
Dany Leviatan, School of Mathematics, Tel Aviv University
Support vector machine soft margin
classifiers
DingXuan Zhou, Department of Mathematics, City University
of Hong Kong
Learning theory studies the learning of objects from random
samples. Binary classification algorithms learn from samples
ways of dividing an input space into two classes.
In this talk we discuss the q-norm soft margin
classifier, a support vector machine classification algorithm.
Our purpose is to provide a PAC error analysis for these
algorithms. The misclassification error is bounded by the
generalization error associated with a general class of loss
functions. The difficulty of bounding the offset is overcome.
The convergence rate is then provided by means of the
regularization error and the approximation error for
reproducing kernel Hilbert spaces. A projection operator is
introduced to have better sample error estimates. As a
corollary we give almost optimal error bounds for
deterministic probability distributions. Even in the
well-known strictly separable case, our analysis yields the best
error bound concerning the influence of the margin. Polynomial
kernels and universal kernels are used to demonstrate the main
results.
Can the Fama-French three-factor
model explain the cross-section of stock returns using wavelet
analysis?
Francis In, Department of Accounting and Finance, Monash
University, Australia
This paper examines whether the Fama and French
three-factor model can explain the cross-section of stock
returns over various time scales using a new approach. The new
approach is based on a wavelet multiscaling method that
decomposes a given time series on a scale-by-scale basis. Fama
and French's three factors are the market excess return, SMB,
and HML. The empirical results provide strong support for the
role of the market factor in explaining the cross-sectional
variation of average stock returns over all time scales. SMB
is shown to play a role in explaining the cross-section of
average returns over all time scales for small stocks, but not
large stocks, while HML is effective in the short run, but not
in the intermediate and long run.
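A scale-by-scale decomposition of a return series can be
sketched with an undecimated Haar filter, used here as an
illustrative stand-in for the paper's wavelet multiscaling
method (the additive split is exact by construction):

```python
import numpy as np

def haar_multiscale(x, levels):
    """Additive scale-by-scale decomposition with an undecimated
    (a-trous) Haar filter: x = smooth + sum of details, with one
    detail series per dyadic scale; circular boundary handling."""
    a = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        a_next = 0.5 * (a + np.roll(a, 2 ** j))  # smooth at scale 2^(j+1)
        details.append(a - a_next)               # detail at this scale
        a = a_next
    return details, a

# one could then regress excess returns on the factor details
# scale by scale, as in the paper's time-scale analysis
```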
The analysis of intrinsic mode functions and instantaneous
phase
Robert C. Sharpley, Industrial Mathematics Institute,
University of South Carolina
Intrinsic Mode Functions (IMFs) were introduced by N.
Huang and his collaborators as elementary monocomponent
signals which arise through the Empirical Mode Decomposition
(EMD) process for 1-D signals. These monocomponents have been
highly successful in analyzing instantaneous phase for signals
in a number of disciplines. The EMD method produces highly
nonlinear, redundant decompositions which are tailored to the
signal, but currently lacks a theoretical foundation.
This presentation will describe recent work of the speaker
and V. Vatchev on
- the characterization of IMFs as solutions to Sturm-Liouville
equations;
- a comparison of the Hilbert transform method (i.e.,
analytic signals) for computing instantaneous phase against
quadrature methods. In particular, extremely nice signals
(both discrete and in closed form) will be presented for
which the phase is non-monotone;
- mode classification of the IMFs of a signal through a
Prüfer quadrature method.
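The Hilbert-transform route from an IMF to an instantaneous
phase can be sketched as follows (the pure-tone test signal is
an illustrative assumption; `scipy.signal.hilbert` returns the
analytic signal x + iH[x]):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Instantaneous phase of a mono-component signal via its
    analytic signal z = x + i H[x] (H = Hilbert transform)."""
    z = hilbert(x)
    return np.unwrap(np.angle(z))

# for a pure tone, the phase grows linearly at the tone's frequency
fs = 1000.0
t = np.arange(1000) / fs
phi = instantaneous_phase(np.cos(2 * np.pi * 5.0 * t))
inst_freq = np.diff(phi) * fs / (2 * np.pi)   # ~5 Hz throughout
```

For genuine IMFs the interest is precisely in the cases where
this phase fails to be monotone, as in the examples announced
in the talk.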
Ultrasonic testing based on
wavelet and JTFA
Xiufeng Zhang, Dept. of Mechanical Engineering, Tsinghua
University, Beijing, China
Materials with coarse grains are widely used in industry;
for example, austenitic steel is employed in the pressure
vessels and pipes of the nuclear and chemical industries. In
ultrasonic nondestructive evaluation of these strongly
sound-scattering materials, severe backscattering noise may
attain peak values greater than the sought flaw pulse and may
distort the flaw echoes. Theory and practice confirm that,
due to the time-varying characteristics of defect signals, it
is more reliable to analyze them in a joint time-frequency
representation than in the pure time domain or frequency
domain. The joint time-frequency analysis (JTFA) of ultrasonic
signals instantaneously reveals the frequency and time of
arrival of target echoes, which helps to characterize the
target. But varying results are obtained by applying different
time-frequency (t-f) algorithms to ultrasonic signals.
Performance evaluation of t-f algorithms for ultrasonic
testing is therefore useful for the choice of an algorithm. In
this paper we present a performance comparison of t-f
algorithms for ultrasonic applications. In particular, we
compare the t-f algorithms based on the detection of the time
of arrival and frequency of target echoes, estimation of the
instantaneous frequency of multiple target echoes, cross-term
attenuation and concentration, and signal-to-noise ratio (SNR)
enhancement. These characteristics are studied by applying the
Wigner-Ville distribution (WVD), exponential distribution
(ED), Gabor transform (GT), and wavelet transform (WT) to
simulated and experimental ultrasonic data. The results
indicate that the WT tracks the frequency trend of the grain
echoes and detects the flaw echoes better than all the other
distributions discussed.
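As a small illustration of the Gabor-transform (STFT) side of
such a comparison, the time of arrival of a simulated flaw
echo can be read off the time-frequency energy. The 5 MHz
echo, 100 MHz sampling rate, noise level, and Hann window
below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.signal import stft

fs = 100e6                                    # 100 MHz sampling
t = np.arange(0, 10e-6, 1 / fs)
# Gaussian-windowed 5 MHz "flaw echo" arriving at 4 microseconds
echo = np.exp(-((t - 4e-6) / 0.3e-6) ** 2) * np.cos(2 * np.pi * 5e6 * t)
rng = np.random.default_rng(0)
x = echo + 0.2 * rng.standard_normal(t.size)  # backscattering "noise"

# STFT with a Hann window, standing in for the Gabor transform's
# Gaussian window
f, tau, Z = stft(x, fs=fs, nperseg=256)
# time of arrival: the segment with the largest t-f energy
toa = tau[np.argmax(np.sum(np.abs(Z) ** 2, axis=0))]
```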
Second generation wavelets for
GIS data compression
Biswajeet Pradhan, Faculty of Engineering & Computer
Technology, Asian Institute of Medicine, Science & Technology,
Malaysia
Optimizing the size of a terrain data set for the generation
of geographical surface images at different levels of detail
is a challenging task. The wavelet technique, which allows
information to be represented at different levels of
resolution, is applied to this problem. A Triangulated
Irregular Network (TIN), generated using the Delaunay
triangulation technique, is used to form the terrain from an
arbitrary set of data. The data with the least effect on the
representation of the surface are selected and eliminated
using second-generation wavelets. The quality of the terrain
representation after applying the proposed technique is
compared with the original representation. The results show
that this method can be used for significant reduction of the
data set.
Stability of algorithms and
generalization
Sayan Mukherjee, Center for Biological and Computational
Learning, Massachusetts Institute of Technology
Supervised learning can be summarized as follows: given a
set of input-output pairs, a function is learned that
hopefully accurately predicts future observations.
Thus the problem of learning can be thought of as a map from a
dataset S to a function f. In classical learning theory the
algorithm used to learn the function is empirical risk
minimization (ERM): select a function from a class of
functions that minimizes the error on the observed dataset.
Classic results for this algorithm state that under
appropriate constraints on the function class the function
learned will generalize, that is, the error rate of future
predictions will be the same as the observed error rate. In
this talk we introduce a constraint on the map from the data
to the function that ensures generalization. For the case of
ERM we show that this condition is equivalent to the classical
constraints on the function class. This formulation allows us
to analyze algorithms from ERM to support vector machines to
k-nearest neighbors using the same mathematical tools.
Adaptive wavelet methods for
modeling and computing turbulent flows
Kai Schneider, L3M-CNRS & CMI, Université de Provence,
Marseille
This work is joint work with Marie Farge (ENS, Paris) and
Giulio Pellegrino (L3M, Marseille).
Recently we have introduced a new wavelet-based method,
called Coherent Vortex Simulation (CVS), to compute turbulent
flows. The main idea is to split the flow into two orthogonal
parts, a coherent flow and an incoherent background flow,
using nonlinear filtering of vorticity. We have shown that the
coherent flow is responsible for the nonlinear dynamics, and
thus its evolution is deterministically computed in an
adaptive wavelet basis, while the incoherent flow is
noise-like, structureless and decorrelated; therefore its
influence is statistically modeled. Since the coherent part is
described by only a few wavelet modes, it is possible to
reduce the computational cost, both in terms of memory
requirements and CPU time. The developed adaptive wavelet
solver for the 2D Navier-Stokes equations uses a semi-implicit
time discretization and a Petrov-Galerkin scheme in space,
with wavelets as trial functions and vaguelettes (operator
adapted wavelets) as test functions. To take into account
no-slip boundary conditions and complex geometries, we couple
the adaptive scheme with a volume penalization technique. In
the talk we shall present different applications of Coherent
Vortex Simulation to 2D and 3D turbulent flows, such as 2D
flows past different bluff bodies and a 3D turbulent mixing
layer.
Ideal interpolation
Carl de Boor, University of Wisconsin  Madison
Starting off with G. Birkhoff's definition of `ideal
interpolation', multivariate polynomial interpolation and its
error are explored.
Adaptive multivariate approximation
using binary space partitions and geometric wavelets
Shai Dekel, RealTimeImage
The Binary Space Partition (BSP) technique is a simple and
efficient method to adaptively partition an initially given
domain to match the geometry of a given input function. Here,
we offer an enhancement to the BSP algorithm in the spirit of
geometric wavelets. This new approach is based on recent
developments in the theory of multivariate nonlinear piecewise
polynomial approximation. We provide numerical examples of
n-term geometric wavelet approximations of known test images
and discuss applications to image denoising and compression.
Preprints can be downloaded from:
https://turbo.turboimage.com/~sd/default.html
Generalized shiftinvariant
frames and duals for subspaces
Ole Christensen, Department of Mathematics, Technical
University of Denmark
Foldable figures, Fuchsian
groups, and multiscale structures
Peter Massopust, TIPS - Tuboscope Integrated Pipeline
Solutions
When defining a multiscale structure one often uses a
dilation and a translation operator to pass information
between and within the multiscale levels. The translation
operator can be thought of as a means of tiling a multiscale
level, starting with a fundamental domain that usually is a
lattice cell. However, other tilings of multiscale levels,
such as those obtained by reflection instead of translation,
work as well. In this lecture we illustrate this approach by
considering foldable figures in Euclidean space and Fuchsian
groups in the complex upper half plane or, equivalently, the
Poincaré disk.
On the existence of
multiresolution analysis for framelets
Marcin Bownik, University of Oregon
We show that a compactly supported tight framelet comes
from an MRA if the intersection of the dilations of the space
of negative dilates, which is defined as the shift-invariant
space generated by the negative scales of a framelet, is
trivial. We also construct examples of non-tight framelets,
which are nevertheless arbitrarily close to tight framelets,
such that the corresponding space of negative dilates is equal
to the entire space L^{2}(R). This talk is based on joint work
with Ziemowit Rzeszotnik.
Construction of Parseval wavelets
from a redundant system of filters
Larry Baggett, University of Colorado, USA
We generalize the classical infinite product construction
of Mallat and Meyer of a multiresolution analysis and an
orthonormal wavelet from a quadrature mirror filter $h$. Our
generalized construction is based on extending the notion of a
filter to a matrix $H$ of periodic functions which satisfies a
more complex, higher-dimensional filter equation. The proof is
more abstract than in the classical case and involves the
analysis of a pair of partial isometries that satisfy a kind
of Cuntz algebra relation.
Nonlinear subdivision schemes for
curves
Nira Dyn, School of Mathematical Sciences, Tel Aviv
University, Israel
This talk is concerned with several nonlinear subdivision
schemes for curves, which are extensions of linear curve
subdivision schemes. A class of curve subdivision schemes on
manifolds is defined in terms of geodesic averages, replacing
the usual averages between points; with these it is possible
to represent, in an alternative way, affine invariant linear
subdivision schemes with masks of finite support. In
particular, an explicit such representation will be given for
the quadratic and cubic B-spline schemes and for the 4-point
interpolatory scheme. Another class of nonlinear schemes on
manifolds is obtained by projecting the affine averages to the
manifold. These two constructions of nonlinear schemes from
linear ones apply also to Lie groups, and in particular to
matrix groups such as the Euclidean motion group, with
geodesics replaced by exponential mappings, and to abstract
manifolds such as the hyperbolic plane. The analysis of such a
nonlinear scheme is done via its proximity to the
corresponding linear Euclidean scheme from which it was
derived. The proximity condition holds for the above-mentioned
two constructions. Two other nonlinear extensions of the
4-point interpolatory subdivision scheme will be discussed.
One extension is for control points in Euclidean space. The
refinement of the control points is driven by their geometry,
a nonlinear feature which makes it possible to control the
shape of the generated curve. The other extension is confined
to functional data, and is motivated by the idea.
This talk reports on three joint works, one with J. Wallner,
one with D. and M. Marinov and one with A. Cohen and B. Matei.
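The geodesic-average construction can be illustrated on the
unit sphere, where the affine average is replaced by the point
at the corresponding parameter along the great-circle arc;
here it is applied to Chaikin's corner-cutting scheme for the
quadratic B-spline (a minimal sketch, not code from the talk):

```python
import numpy as np

def geodesic_average(p, q, t):
    """Point at parameter t on the great-circle arc from p to q
    (unit vectors): the spherical analogue of (1-t)p + tq."""
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if omega < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * omega) * p
            + np.sin(t * omega) * q) / np.sin(omega)

def chaikin_on_sphere(points, n_rounds=3):
    """Chaikin's corner-cutting scheme for a closed control
    polygon, with affine averages replaced by geodesic averages
    on the unit sphere."""
    pts = [np.asarray(p, dtype=float) for p in points]
    for _ in range(n_rounds):
        new = []
        m = len(pts)
        for i in range(m):
            p, q = pts[i], pts[(i + 1) % m]
            new.append(geodesic_average(p, q, 0.25))
            new.append(geodesic_average(p, q, 0.75))
        pts = new
    return np.array(pts)
```

By construction every refined point lies on the sphere, which
the plain affine average in the ambient space would not
guarantee.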
MRA framelets and their connections to Riesz wavelets
Bin Han, Department of Maths and Stat Sciences, University
of Alberta
Riesz wavelets from the loop scheme in computer graphics
Bin Han, Department of Maths and Stat Sciences, University
of Alberta
Wavelet transforms and admissible
group representations
Eric Scott Weber, Iowa State University, USA
Most continuous wavelet transforms arise from
square-integrable group representations, and discrete wavelet
transforms arise from discretizing, or sampling, the
continuous transforms. We will talk about necessary and
sufficient conditions for there to be either a continuous or a
discrete wavelet transform, in terms of a generalization of
square-integrability. We will
also discuss the relationship of these conditions to several
notions of multiresolution analysis.
Image compression by linear splines
over adaptive triangulations
Nira Dyn, Tel Aviv University, Israel
A new method for image compression is presented. The method
is based on the approximation of an image, regarded as a
function, by a linear spline over an adapted triangulation,
D(Y), which is the Delaunay triangulation of a
small set Y of significant pixels. The linear
spline minimizes the mean square error to the image, among all
linear splines over D(Y). The significant pixels
in Y are selected by an adaptive thinning
algorithm, which recursively removes less significant pixels
in a greedy way, using a sophisticated measure of the
significance of a pixel. The proposed compression method
combines the approximation scheme with a customized scattered
data coding scheme. We demonstrate that our compression method
outperforms JPEG2000 on two geometric images and performs
competitively with JPEG2000 on three popular test cases of
real images.
This is joint work with L. Demaret and A. Iske.
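The reconstruction half of the scheme can be sketched with
SciPy as an interpolating linear spline over the Delaunay
triangulation of the retained pixels (the talk's method
instead solves a least-squares fit over the same
triangulation, and the pixel selection itself is the adaptive
thinning step, not shown here):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def linear_spline_reconstruct(img, pixels):
    """Reconstruct an image from its values at a small pixel set
    Y by the linear spline over the Delaunay triangulation D(Y).
    `pixels` is an (n, 2) array of (row, col) indices whose
    convex hull must cover the image domain (e.g. include the
    four corners)."""
    vals = img[pixels[:, 0], pixels[:, 1]].astype(float)
    tri = Delaunay(pixels)
    spline = LinearNDInterpolator(tri, vals)
    ii, jj = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    grid = np.column_stack([ii.ravel(), jj.ravel()])
    return spline(grid).reshape(img.shape)
```

On an image that is itself linear in the pixel coordinates,
four corner pixels already reproduce it exactly; real images
need the adaptively thinned significant-pixel set.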
Wavelets on triangulations
Doug Hardin, Mathematics Department, Vanderbilt University
I will review several constructions of wavelets and
prewavelets on triangulations. In particular, I will present a
recent construction of continuous, orthogonal, piecewise
polynomial wavelets that may be adapted to a semiregular
sequence of triangulations. This construction is joint work
with G. Donovan, J. Geronimo, and B. Kessler.
Geometric image representation
with Bandelets
Erwan Le Pennec, École Polytechnique
The wavelet representation is currently one of the most
widely used representations for images. It is optimal for
smooth images but not for piecewise smooth images. The
bandelet representation is one of the geometric
representations that overcome this limitation of wavelets.
We present here the construction of this adaptive
representation, explain how to choose the geometry defining
the best bandelet representation in a nonlinear
approximation setting, and show how to apply this framework to
compression and estimation.
The Bandelet representation:
algorithms and applications
Erwan Le Pennec, École Polytechnique
The wavelet representation is currently one of the most
widely used representations for images. It is optimal for
smooth images but not for piecewise smooth images. The
bandelet representation is one of the geometric
representations that overcome this limitation of wavelets.
After a brief presentation of the bandelet representation
and its properties, we describe its practical discrete
implementation and show how to adapt it to various settings:
image compression, seismic data restoration, ...
Wavelet-based extraction of
coherent structures to analyze and compute Navier-Stokes
equations
Marie Farge, LMD-CNRS, ENS Paris
Joint work with Kai Schneider, CMI, Universite de Provence,
Marseille, France
Turbulent flows are solutions of the Navier-Stokes
equations in the limit where the nonlinear advection term
strongly dominates the linear dissipation term. Their highly
nonlinear dynamics leads to the formation of coherent
structures and exhibits multiscale behaviour, characterized
by a power-law energy spectrum.
In the first part we will present the wavelet-based method
we have proposed [Phys. Fluids, 11(8), 1999 and PRL 87(55),
2001] to extract coherent vortices out of two- and
three-dimensional turbulent flows. We will show that the
coherent structures are responsible for the nonlinear
dynamics, as well as for the non-Gaussian statistics and
long-range spatial correlation. In contrast, the remaining
incoherent modes exhibit a diffusive behaviour, with
quasi-Gaussian statistics and spatial decorrelation.
In the second part, we will present the CVS (Coherent
Vortex Simulation) method we have proposed to compute
turbulent flows [Flow, Turbulence and Combustion, 66(4),
2001]. It splits each flow realization into two orthogonal
parts, a coherent and an incoherent flow, using the
wavelet-based method above, and computes the evolution of the
coherent flow, while the incoherent modes are discarded at
each time step to model turbulent dissipation.
To download our papers: //wavelets.ens.fr
« Back
A recursive algorithm for nonlinear wavelet thresholding:
applications to signal and image processing
Marie Farge, LMD-CNRS, ENS Paris
Download/View PDF
« Back
Pseudo-splines, wavelets and framelets
Dong Bin, National University of Singapore
Download/View PDF
« Back
Reconstructing multivariate functions from scattered
data efficiently
Holger Wendland, Institut für Numerische und Angewandte
Mathematik, Georg-August-Universität Göttingen
Download/View PDF
« Back
Localized frames and sparsity
Karlheinz Gröchenig, Institute of Biomathematics and
Biometry, Germany
We present a new concept to measure the quality of frames,
namely the "localization" of frames, which is defined by the
decay properties of the associated Gramian matrix. Using
Banach algebra methods, it will be shown that the canonical
dual frame possesses the same localization properties. As a
consequence, elements in a class of naturally associated
Banach spaces possess sparse representations. We will further
address the solution of problems on non-uniform Gabor frames
and on sampling in shift-invariant spaces. (Part of this is
joint work with M. Fornasier.)
« Back
The role of Banach algebra
methods in numerical analysis and sampling theory
Karlheinz Gröchenig, Institute of Biomathematics and
Biometry, Germany
We will give a survey of results in applied analysis where
Banach algebra methods have led to progress and new results.
Specifically, we will discuss
(a) several new estimates for the off-diagonal decay of the
inverse of an (infinite-dimensional) matrix,
(b) the convergence properties of the finite section method in
numerical analysis and give new error estimates,
(c) several measures for the quality of a frame and show that
the dual frame and the associated tight frame possess the same
quality,
(d) sparse representations with frames, and finally
(e) almost diagonalization of pseudodifferential operators
(= time-varying systems) and their symbolic calculus.
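Point (a) can be illustrated with a toy computation (my own example, not taken from the talk): the inverse of a banded matrix is in general full, but its entries decay exponentially away from the diagonal, which is the kind of off-diagonal decay estimate the Banach algebra methods make precise.

```python
import numpy as np

n = 200
# a tridiagonal (banded), diagonally dominant symmetric matrix
A = 2.5 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
Ainv = np.linalg.inv(A)

# largest entry of the inverse at each distance k from the main diagonal
decay = np.array([np.abs(np.diag(Ainv, k)).max() for k in range(40)])
# decay[k] shrinks roughly geometrically in k: exponential off-diagonal decay
```

For this Toeplitz example the geometric rate can even be computed in closed form; the Banach algebra results of the talk cover far more general decay classes.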
« Back
High resolution image
reconstruction: a framelet approach
Sherman D. Riemenschneider, Department of Mathematics, West
Virginia University
(joint with Raymond Chan, Lixin Shen and Zuowei Shen)
High resolution image reconstruction refers to the
construction of a high-resolution image from multiple
low-resolution, shifted, and degraded samples of a true image.
We review the model proposed by Bose and Boo, analyze it in a
framelet setting, and propose a framelet-based iterative
algorithm.
« Back
Dual Gramian analysis
Zuowei Shen, Dept. of Mathematics, National University of
Singapore
The dual Gramian analysis has been used in the study of the
frame properties of shift invariant systems. It allows us to
decompose the frame operator into a collection of simpler
operators (fibers) in the Fourier domain. Each one of the
'fiber' operators acts from a sequence space to the same
sequence space and its matrix representation can be explicitly
described in terms of the Fourier transforms of the generators
of the shift invariant system. Application to the Gabor
systems which are shift invariant shows the power of the
analysis. However, it cannot be directly applied to many other
systems such as the wavelet systems, which are not shift
invariant. This leads us to consider the dual Gramian analysis
in a more general setting, i.e. general shift invariant
systems which include wavelet systems as a special case. In
this talk, I will give the development and the recent results
on the dual Gramian analysis.
« Back
Unitary extension principle and
applications
Zuowei Shen, Dept. of Mathematics, National University of
Singapore
Since the unitary extension principle was published in
1997, it has led to much theoretical development and has been
used in various applications. The unitary extension principle
provides great flexibility in designing tight frame wavelet
filters and makes constructions of tight frame wavelets
painless. In this talk, I will first briefly review the recent
development based on or motivated by the unitary extension
principle. Then I will focus on a few applications that use
the unitary extension principle to design tight frame based
algorithms. In particular, the power of this principle is
illustrated by showing how it is used in solving various
problems in the area of high resolution image
reconstruction.
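A standard concrete instance of the principle is the piecewise-linear B-spline (hat function) construction of Ron and Shen: the refinement mask yields two framelet masks, and the two unitary extension principle conditions on the symbols can be verified numerically.

```python
import numpy as np

# refinement mask of the piecewise-linear B-spline (hat function) and the
# two framelet masks of the classical tight-frame construction
masks = [np.array([1.0, 2.0, 1.0]) / 4,
         np.array([1.0, 0.0, -1.0]) * np.sqrt(2) / 4,
         np.array([-1.0, 2.0, -1.0]) / 4]

def symbol(mask, xi):
    """Trigonometric symbol m(xi) = sum_k mask[k] e^{-i k xi}."""
    k = np.arange(len(mask))
    return np.sum(mask[:, None] * np.exp(-1j * np.outer(k, xi)), axis=0)

xi = np.linspace(0.0, 2.0 * np.pi, 1000)
# UEP conditions: the symbols form a row of a unitary-type matrix
cond1 = sum(np.abs(symbol(m, xi)) ** 2 for m in masks)                       # = 1
cond2 = sum(symbol(m, xi) * np.conj(symbol(m, xi + np.pi)) for m in masks)   # = 0
```

Both identities hold for every xi, which is exactly what makes the resulting wavelet system a tight frame.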
« Back
Tight wavelet frames with symmetry
Say Song Goh, Department of Mathematics, National University
of Singapore
Symmetric and antisymmetric compactly supported wavelets
are much desired in many practical applications. In this talk,
we shall provide a general, and yet simple, method to derive a
new set of wavelets that are symmetric or antisymmetric from
any given set of wavelets. The affine system generated by the
new wavelets forms a tight frame whenever the affine system
generated by the original wavelets does. Further, when the original
wavelets are generated via a multiresolution analysis, the new
wavelets can also be derived from a multiresolution analysis.
As illustration, examples of new symmetric and antisymmetric
compactly supported wavelets obtained from the method will be
shown. This talk is based on joint work with Zuowei Shen and
Zhi Yuan Lim.
« Back
Multidisciplinary research projects
of the Centre for Wavelets, Approximation and Information
Processing
Say Song Goh, Department of Mathematics, National University
of Singapore
The Centre for Wavelets, Approximation and Information
Processing of the National University of Singapore is a
multidisciplinary research centre in the Department of
Mathematics that emphasizes the synergy of mathematicians,
engineers and computer scientists. Its activities, comprising
basic research, applied research and systems development,
focus on mathematical signal, image and information
processing. In this talk, we shall introduce various
multidisciplinary research projects of the centre. These
include projects on image compression, document image
compression, video compression and summarization, underwater
signal denoising and separation, and time-frequency analysis,
feature extraction and classification of whale call signals.
« Back
Adaptive scattered data fitting by spline-wavelets:
regularization and robust fitting
Angela Kunoth, Institut für Angewandte Mathematik und
Institut für Numerische Simulation, Universität Bonn
Download/View PDF
« Back
On vanishing moments for
multivariate wavelets
Maria Skopina, Saint Petersburg University
We consider wavelets with matrix dilation. An explicit
formula for masks providing vanishing moments is presented.
The class of interpolatory masks with vanishing moments is
also described. For an interpolatory mask, a formula for a
dual mask which also provides vanishing moments of the same
order is given. For this case, wavelet masks can be found
explicitly.
« Back
Spanning and sampling in Lebesgue,
Sobolev and Hardy spaces
Huy-Qui Bui,
University of Canterbury, and Richard S. Laugesen,
University of Illinois
The Shannon-Whittaker sampling theorem shows how to
reconstruct a bandlimited signal from its sampled values at a
discrete set of points, using the generating function sinc x.
Wavelet and wavelet-frame theory, and quasi-interpolation
theory in the Strang-Fix tradition, together form a collective
effort to extend the Shannon-Whittaker sampling theorem to
other classes of signals and to other methods of sampling
(analyzing) and reconstructing (synthesizing). The former
theory assumes the generating function to have vanishing
integral, and the latter assumes a "constant periodization"
condition.
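For concreteness, the Shannon-Whittaker formula f(t) = sum_n f(nT) sinc((t - nT)/T) can be checked numerically with a truncated sample sequence; the evaluation points below are kept well inside the sampled window because the slowly decaying 1/n tails of sinc dominate the truncation error near the ends.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Truncated Shannon-Whittaker series: sum_n f(nT) sinc((t - nT)/T)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc, sin(pi x)/(pi x), as the formula requires
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

T = 0.5                                  # sampling rate 2 Hz, Nyquist frequency 1 Hz
f = lambda t: np.cos(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 0.7 * t)
samples = f(np.arange(200) * T)          # samples covering [0, 100)
t = np.linspace(40.0, 60.0, 101)         # evaluate far from both ends
approx = sinc_reconstruct(samples, T, t)
# approx agrees with f(t) up to a small truncation error
```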
This series of talks, part 1 by R.S. Laugesen and part 2 by
H.Q. Bui, presents a new development in the Strang-Fix
tradition. We show how to explicitly approximate an
"arbitrary" L^{p} function, using average
sampling and then reconstructing by an affine system whose
generating function has neither vanishing integral nor
constant periodization. In other words, the signal, the
analyzer and the synthesizer can all be essentially arbitrary.
The key idea is automatic cancellation of errors at different
scales.
Sobolev spaces can likewise be explicitly spanned, with the
Strang-Fix condition being reduced by one degree from previous
work.
The Hardy space H^{1} yields to this
approach also.
« Back
On the analysis of synthetic NMR signals
Jeff Kline, Department of Mathematics, University of
Wisconsin-Madison
Nuclear Magnetic Resonance (NMR) describes a physical
phenomenon exploited by a wide range of applications. In this
talk, I will focus on NMR in the context of protein analysis.
The data collected from protein analysis experiments is
expected to live in a very "thin" space of signals: linear
combinations of exponentially decaying sinusoids. Although
each signal in this space has a simple parametric
representation, recovery of this representation is
challenging. I will provide a brief description of NMR to
explain how it is used to analyze complex molecules, and then
I will summarize a new approach towards parameter recovery.
« Back
Constructing wavelet frames with
given properties
Alexander Petukhov, Department of Mathematics, University
of Georgia
We are going to talk about two algorithms for constructing
wavelet bases and frames. The first one gives a way to
construct compactly supported (anti)symmetric orthogonal
wavelets and tight wavelet frames for MRAs generated by a
symmetric scaling function with an arbitrary integer dilation
factor. The second one generates a compactly supported dual
wavelet frame with the maximum number of vanishing moments for
a given wavelet frame system whose MRA has a given
approximation order. All symbols of the obtained framelets and
scaling functions are polynomials (not rational functions);
this is the main difference between the presented algorithm
and the existing ones.
« Back
Fast algorithms for recovery of data irregularly
subsampled from redundant representation
Alexander Petukhov, Department of Mathematics, University
of Georgia
Download/View PDF
« Back
Universal algorithms for learning theory
Albert Cohen, Laboratoire JacquesLouis Lions, University
Pierre et Marie Curie, France
Download/View PDF
« Back
Analysis on multiresolution mosaic images
WenLiang Hwang, Academia Sinica, Taiwan
Image mosaicing is the act of combining two or more images
and is used in many applications in computer vision, image
processing, and computer graphics. It aims to combine images
such that no obstructive boundaries exist around overlapped
regions and to create a mosaic image that exhibits as little
distortion as possible from the original images. In the
proposed techniques, the to-be-combined images are first
projected into wavelet subspaces. The images projected into
the same wavelet subspace are then blended. Our blending
function is derived from an energy minimization model which
balances the smoothness around the overlapped region and the
fidelity of the blended image to the original images.
Experimental results and subjective comparisons with other
methods are given.
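The multiresolution blending idea can be sketched in one dimension. This is a schematic illustration only: a Haar subband decomposition and a simple linear transition mask stand in for the wavelet subspaces and the energy-minimizing blending function of the talk.

```python
import numpy as np

def haar_levels(x, levels):
    """Unnormalized Haar decomposition: detail arrays, then the approximation."""
    coeffs, a = [], x.copy()
    for _ in range(levels):
        coeffs.append((a[0::2] - a[1::2]) / 2)
        a = (a[0::2] + a[1::2]) / 2
    coeffs.append(a)
    return coeffs

def haar_merge(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2], out[1::2] = a + d, a - d
        a = out
    return a

def mosaic_blend(x, y, levels=4):
    """Blend two signals subband by subband with a smooth transition mask,
    so the seam is distributed across scales."""
    cx, cy = haar_levels(x, levels), haar_levels(y, levels)
    merged = []
    for a, b in zip(cx, cy):
        m = np.linspace(0.0, 1.0, len(a))   # transition mask at this scale
        merged.append((1 - m) * a + m * b)
    return haar_merge(merged)

z = mosaic_blend(np.ones(256), -np.ones(256))
# z transitions gradually from +1 on the left to -1 on the right
```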
« Back
Ergodic dynamics in sigma-delta
quantization: tiling invariant sets and spectral analysis of
error
Sinan Güntürk, Department of Mathematics, New York
University
Sigma-delta quantization is an A/D conversion method in
which signals are represented by sequences that lie in a very
restricted set of discrete amplitudes, such as 0 and 1 only.
These sequences are very special in that for each input
signal, local averages of the output bit sequence must track
those of the input sample sequence, thus enabling simple
convolutional reconstruction. We will be concerned with the
approximation of constant functions only, which is a seemingly
basic case that presents surprisingly complex behavior.
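The tracking property is easy to see in the simplest case. The sketch below is the standard greedy first-order sigma-delta scheme for a constant input x in [0,1) (a hand-rolled illustration, not the m-th order schemes analyzed in the talk); since the state u stays in [0,1), the average of N output bits differs from x by at most 1/N.

```python
import numpy as np

def sigma_delta_bits(x, n_steps):
    """Greedy first-order sigma-delta: u <- u + x - q, with bit q = [u + x >= 1]."""
    u, bits = 0.0, np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        q = 1 if u + x >= 1.0 else 0
        u += x - q          # u remains in [0, 1) for x in [0, 1)
        bits[i] = q
    return bits

x = 0.37
bits = sigma_delta_bits(x, 10000)
# the running average of the 0/1 output tracks the constant input
```

Summing the recursion telescopes to mean(bits) = x - u_N / N, which is the 1/N error bound; the talk's spectral analysis concerns the much finer behavior of this error.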
An mth-order sigma-delta scheme can be translated into a
piecewise affine dynamical system on R^{m}. We first show that
stable schemes yield invariant sets that tile the phase space
by integer translations. In general these tiles may have a
finite multiplicity. However when the multiplicity is one, the
dynamics becomes isomorphic to a generalized skew-translation
of the m-torus. We use this isomorphism to analyze the
asymptotics of the convolutional reconstruction error,
employing tools from Diophantine approximation and spectral
theory.
This is joint work with Nguyen Thao.
« Back
Projective modules and Hilbert modules for $C(\mathbb
T^n)$
Judith Packer, University of Colorado, USA
The concept of finitely generated projective modules and
Hilbert modules for unital C*-algebras will be reviewed. The
emphasis will be on the study of these modules for the
C*-algebra $C(\mathbb T^n)$ of continuous complex-valued
functions on the n-torus. We will go over the case $C(\mathbb
T^2)$ in detail, with an emphasis on examples, and with an eye
towards their use in the theory of projective multiresolution
analyses in Talk 2.
« Back
Projective multiresolution analyses for $L^2(\mathbb
R^n)$
Judith Packer, University of Colorado, USA
The notion of “projective” multiresolution analysis is
defined; by definition, the initial space corresponds to a
finitely generated projective module over the algebra $C(\mathbb
T^n)$ of continuous complex-valued functions on the n-torus.
The case of ordinary multiwavelets is that in which the
projective module is a free module. The properties of
projective multiresolution analyses are discussed, including
the frames that they provide for $L^2(\mathbb R^n)$. Examples for n=2 and
n=3 with diagonal dilation matrices are then discussed, where
the initial module is not free. In these examples we compute
the isomorphism class of the wavelet module. This work is
joint with Marc A. Rieffel.
« Back
Some operator relations for
normal operators and connections to multiresolution theory
Ola Bratteli, University of Oslo, Norway
This is joint work in progress with Palle E. T. Jorgensen.
The starting point is Baggett and Merrill's construction of
scaling functions and wavelet functions from the action of
dilations and translations on a preferred scaling subspace.
The method can be extended to construct wavelet frames on more
nontraditional structures by replacing the circle and the map
z -> z^N by more general dynamical systems such as, say, a Julia
set and the action of the corresponding rational function on
the set. On the way we construct solutions of operator
relations of the form UT = P(T)U where U is unitary, T is
normal and P is a rational or more general function.
« Back
Caplets: wavelets without wavelets
Amos Ron, Department of Computer Sciences, University of
WisconsinMadison, USA
Download/View PDF
« Back
The romance of hidden components
David Donoho, Statistics Department, Stanford University,
USA
Perhaps the most romantic and seductive idea in all of
science is that, hiding behind the enormously complex
structures we see in the world around us, there are hidden
components that are on the one hand very simple and even
elegant and on the other hand easily combine to generate all
the variety we see about us. Classical examples include Newton
and the spectrum of light, eugenicists and the idea of IQ;
modern examples include wavelets and quarks. I will review
some of the classical ideas of hidden components, starting
from principal components or even before, and describe some of
the most recent notions, such as independent components
analysis, sparse components analysis, nonnegative matrix
factorizations, and cumulant components. I will try to keep
things at an elementary level, communicating the
attractiveness of these ideas to scientists and engineers
outside of statistics, the wide-ranging impact these ideas are
having from high-tech industry to neuroscience and astronomy,
and describing what I think is the much greater role that
statisticians should be playing in developing and deploying
these methods.
« Back
The Calderon reproducing formula
and Brownian motion functionals
Richard Gundy, Rutgers University, USA
Recent work on local solutions of the NavierStokes
equation by Bhattacharya et al. has involved probability ideas
and has renewed interest in the "background radiation" process
in the upper half-plane in R^{4}. Stochastic
integral calculations with this process give representation
formulas for compositions of Riesz transforms. In this lecture
we will explain these computations in which a key role is
played by the Calderon reproducing formula.
« Back
Low pass filters, probability, and
statistical mechanics
Richard Gundy, Rutgers University, USA
Scaling functions for multiresolution analyses are obtained
from periodic functions known as low-pass filters. A. Cohen
found the first necessary and sufficient conditions for a
trigonometric polynomial to be a low-pass filter. In this
lecture, we summarize what can be said about Cohen's condition
when the candidate periodic function is not a polynomial. We
shift the domain of discussion from R^{1} (or
R^{n}) to a sequence space with a shift
operation, and interpret the basic objects as random variables
and probabilities on this sequence space. Cohen's conditions
have simple interpretations as conditions on certain
martingales. Moreover, this approach shows the limitations of
his conditions: there are continuous low-pass filters (and
scaling functions) that do not satisfy them, contrary to some
claims in the literature. This kind of probability analysis is
typical in statistical mechanics and parts of ergodic theory.
« Back
Unitary systems, wavelet sets, and
operator-theoretic interpolation of wavelets and frames
David R. Larson, Texas A&M University
There are three basic parts:
 Part I. Unitary systems and wavelet sets
A wavelet is a
special case of a vector in a separable Hilbert space that
generates a basis under the action of a collection, or
"system", of unitary operators defined in terms of
translation and dilation operations. This approach to
wavelet theory goes back, in particular, to earlier work of
Goodman, Lee and Tang in the context of multiresolution
analysis. We will begin by describing the
operator-interpolation approach to wavelet theory using the
local commutant of a system that was worked out by the
speaker and his collaborators a few years ago. This is
really an abstract application of the theory of operator
algebras, mainly von Neumann algebras, to wavelet theory.
The concrete applications of operator-interpolation to
wavelet theory include results obtained using specially
constructed families of wavelet sets. In fact X. Dai and the
speaker had originally developed our theory of wavelet sets
specifically to take advantage of their natural and elegant
relationships with these wavelet unitary systems. We will
discuss some unpublished or partially published results that
are due to this speaker and his former students. Some of
these are related to published work of the Wutam Consortium.
We will also discuss some new results and open questions.
 Part II. Unitary systems and frames
A frame is a sequence of
vectors in a Hilbert space which is a compression of a basis
for a larger space. (This is not the usual definition in the
frame literature, but it is equivalent to the usual
definition. In this spirit, the usual "inequality"
definition can be thought of as an abstract characterization
of a compression of a basis.) Because of this compression
relationship between frames and bases, the unitary system
approach to wavelets (and more generally: wandering vectors)
is perfectly adaptable to frame theory. This idea was
developed into a theory a few years ago by D. Han and the
speaker. The use of the local commutant is along the same
lines. We will describe some new results, notably those of
the speaker in collaboration with W.-S. Tang and E. Weber.
And we will discuss some open questions.
 Part III. Decompositions of operators and operator-valued
frames
We will discuss some joint work with K. Kornelson
and others on construction of frames with targeted
properties, and some joint work with V. Kaftal on a theory
of operator-valued frames. This is indirectly related to
joint work with M. Frank on modular frames. These are all
related to targeted decompositions of positive operators.
« Back
Continuously differentiable
wavelets on triangulations
RongQing Jia, Department of Mathematical Sciences
University of Alberta, Canada
In this talk we will first review B-net representations of
bivariate splines (piecewise polynomials) over triangulated
polygonal domains. By using the B-net representation as a
convenient tool, we will be able to find a nested family of
spaces of continuously differentiable splines, which are
suitable for multiresolution analysis. Furthermore, we will
construct a basis of continuously differentiable wavelets with
short support. The global stability of the wavelet basis will
be established by employing certain techniques in the theory
of function spaces. Our wavelet basis will be useful for
numerical solutions of partial differential equations.
« Back
Approximation power of refinable
spaces
Olga Holtz, Department of Mathematics, University of
California, Berkeley
Distributional solutions to a vector refinement equation
generate shift-invariant spaces, whose approximation power is
of interest in MRA, subdivision, and wavelets. The focus of
this talk is on the space of compactly supported
distributional solutions, which is always finite-dimensional.
By extending the classical L_{2} notion of
approximation order to shift-invariant subspaces of Sobolev
spaces, it is possible to characterize the approximation order
of any shift-invariant space generated by a compactly
supported distribution. The interplay among solutions to the
same refinement equation can be further explored to improve on
these generic results. To this end, the notion of universal
supervectors is introduced and connected to the notions of
polynomial reproduction, sum rules, and Condition Z^{k}.
Finally, a novel approach is presented to derive the
approximation power of refinable vectors from their smoothness.
« Back
Perturbation that does not affect the
regularity of a linear subdivision scheme, not even by a bit
Thomas Yu, Department of Mathematical Sciences, Rensselaer
Polytechnic Institute
Nira Dyn will probably talk about nonlinear manifold
subdivision schemes T derived from linear subdivision schemes
S; and after a proximity condition between S and T is
established, one can then conclude that T inherits certain
regularity (C^{k}) properties from S. An open
question is whether the nonlinearity in T may actually `take
away' a fraction of the regularity of S: it is
conceivable that a small nonlinear perturbation may take away
a little bit of smoothness.
I report three provable examples, and also some
computationally observed examples, in which certain
perturbations of a linear subdivision scheme do not
deteriorate smoothness at all, not even by a fraction. Here
smoothness is measured by the critical Hölder exponent. These
examples (unrelated to the aforementioned subdivision schemes
for manifold data) include:
 - Lagrange interpolatory subdivision operated on mildly
irregularly spaced multigrids (work of Daubechies, Guskov and
Sweldens)
 - median interpolating subdivision
 - p-mean interpolating subdivision
I shall outline the techniques for proving these results,
and address many open questions that remain.
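For reference, here is one refinement step of the 4-point Lagrange interpolatory (Deslauriers-Dubuc) scheme on a uniform grid, the regularly spaced special case of the first example above: old points are kept, and each new midpoint is taken from the cubic through the four nearest old points, so cubic polynomials are reproduced exactly.

```python
import numpy as np

def dd4_step(p):
    """One step of the 4-point interpolatory (Deslauriers-Dubuc) scheme.
    Old points survive; midpoints use the mask (-1, 9, 9, -1)/16.
    Boundary points without four neighbours are simply dropped."""
    mids = (-p[:-3] + 9 * p[1:-2] + 9 * p[2:-1] - p[3:]) / 16
    out = np.empty(2 * (len(p) - 2) - 1)
    out[0::2] = p[1:-1]   # retained old points
    out[1::2] = mids      # inserted midpoints
    return out

f = lambda t: t**3 - 2 * t + 1
coarse = f(np.arange(10.0))   # samples on the integers 0..9
fine = dd4_step(coarse)       # values on the grid 1, 1.5, 2, ..., 8
```

Iterating dd4_step drives the data toward a smooth limit function; its smoothness in the sense of the critical Hölder exponent is exactly the quantity the talk asks about under perturbation.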
« Back
Biorthogonal eigensystems of
scaling operators
Seng Luan Lee, Dept. of Mathematics, National University of
Singapore
Download/View PDF
« Back
The dual weighted residual method
from a wavelet perspective
Wolfgang Dahmen, Rheinisch-Westfälische Technische
Hochschule Aachen, Germany
The dual weighted residual method (DWR) has been
attracting increasing attention over the past years as a
paradigm for so-called goal-oriented numerical schemes,
primarily in connection with Finite Element discretizations.
In many complex numerical simulation tasks one is not
interested in the whole solution of a PDE but merely in some
local functional of the solution like point values,
local averages, or integrals over a lower-dimensional manifold.
“Goal-oriented” then means to compute this value within some
target accuracy tolerance at possibly low numerical cost. In
particular, this could mean that the solution is only very
poorly approximated in parts of the domain which results in
significant savings in comparison with numerical schemes that
first determine a global approximation of the solution and
then apply the functional to that approximation. The approach
is based on error representations derived from duality
arguments. Since this representation can typically be bounded
by sums of local quantities of residual type the equilibration
of these quantities gives rise to adaptive resolution schemes.
While this has often been observed to work very well in
numerical applications, no rigorous error estimates or
convergence proofs seem to be known, not even for linear model
problems. A principal obstruction is that the error
representation involves the unknown dual solution. The
relevant information on the dual solution has to be inferred
from approximations to the dual problem. The need to
understand the effect of the unresolved scales in those
approximations indicates the multiscale nature of the problem.
This talk reviews some recent joint work with A. Kunoth which
aims at deriving rigorous convergence results by recasting the
DWR method in a wavelet framework and exploiting recent
advances in the development of adaptive wavelet solvers.
« Back
Discretizing manifolds via minimum energy points
Doug Hardin, Mathematics Department, Vanderbilt University
Download/View PDF
« Back
Subdivision in geometric modeling
and computer graphics
Denis Zorin, New York University
In this tutorial I present an overview of multiresolution
techniques in geometric modeling and computer graphics,
focusing on constructions for surfaces of arbitrary topology.
Subdivision for arbitrary meshes is a central element of many
such constructions. I will review current research on
subdivision schemes for arbitrary meshes, including the basic
theory and applications, and discuss theoretical problems.
 Part I: Overview of multiresolution in graphics and
geometric modeling.
Multiresolution methods are used in many principal areas of
computer graphics: rendering, geometric modeling and
simulation. In this part of the tutorial I will present a
survey of existing applications and multiresolution
representations used in them.
 Part II: Subdivision surfaces: overview
Subdivision surfaces are the basis of one of the common
approaches to multiresolution geometric modeling. This part
will include a survey of subdivision schemes for irregular
meshes, including the most common subdivision surface
constructions, multiresolution surfaces based on subdivision
and applications.
 Part III: Subdivision surfaces: theory
In this part I will discuss the most important results of
subdivision surface theory, primarily related to smoothness
of limit surfaces.
 Part IV: Approximation on surfaces; alternatives and
approaches
The final part of the tutorial will be devoted to
approximation of functions on surfaces, approximation by
subdivision bases in particular. In conclusion I will
discuss open problems and some alternative approaches.
« Back