## ~ Abstracts ~

A fast total variation minimization method for image restoration
Michael Ng, Hong Kong Baptist University, Hong Kong

In this talk, we study a fast total variation minimization method for image restoration. In the proposed method, we use the modified total variation minimization scheme to denoise the deblurred image.
An alternating minimization algorithm is employed to solve the proposed total variation minimization problem. Our experimental results show that the quality of images restored by the proposed method is competitive with that of images restored by existing total variation restoration methods. We show the convergence of the alternating minimization algorithm and demonstrate that the algorithm is very efficient.

New algorithms in information science
Stanley Osher, University of California, Los Angeles, USA

The past few years have seen an incredible explosion of new (or revival of old) fast and effective algorithms for various imaging and information science applications. These include: nonlocal means, compressive sensing, graph cuts, Bregman iteration, as well as relatively old favorites such as the level set method and PDE based image restoration. I'll give my view of where we are, hopefully giving credit to all the creators of these new and exciting multiscale techniques.

Missing data recovery by tight-frame algorithms with flexible wavelet shrinkage
Raymond Chan, Chinese University of Hong Kong, Hong Kong

The recovery of missing data from incomplete data is an essential part of any image processing procedure, whether the final image is utilized for visual interpretation or for automatic analysis. In this talk, we first introduce our tight-frame-based iterative algorithm for missing data recovery. By borrowing ideas from anisotropic regularization and diffusion, we can further improve the algorithm to handle edges better. The algorithm falls within the framework of forward-backward splitting methods in convex analysis, and its convergence can hence be established. We illustrate its effectiveness in a few main applications in image processing: inpainting, impulse noise removal, super-resolution image reconstruction, and video enhancement.

Exploratory path planning and target detection
Richard Yen-Hsi Tsai, University of Texas at Austin, USA

I will present recent work on planning a path through an unknown environment under various settings. In this talk, a robot is placed in an unknown environment and must map out the environment using what can be seen from its current and previous locations. When an unknown diffusive source is present, the robot, equipped with an appropriate sensor, is to move to a location so that the unknown source can be determined from the collected sensor data and kept under visual surveillance by the robot.

Image guided radiation therapy
Lei Xing, Stanford University, USA

Recent technical advances in planning and delivering IMRT provide an unprecedented means for producing exquisitely shaped radiation doses that closely conform to the tumor dimensions while sparing sensitive structures. The development of 3D CRT and IMRT places more stringent requirements on the accuracy of beam targeting. In practice, large uncertainties exist in tumor volume delineation and in target localization due to intra- and inter-organ motions. The utility of modern radiation technologies, such as 3D CRT and IMRT, cannot be fully exploited without eliminating or significantly reducing these uncertainties. The need to improve targeting in radiation treatment has recently spurred a flood of research activities in image-guided radiation therapy (IGRT).

While all RT procedures are image guided per se, traditionally, imaging technology has primarily been used to produce 3D scans of the patient’s anatomy to identify the location of the tumor prior to treatment. The verification of a treatment plan is typically done at the level of beam portals relative to the patient’s bony anatomy before patient treatment. In the current literature, the term IGRT is employed loosely to refer to newly emerging radiation planning, patient setup and delivery procedures that integrate cutting-edge image-based tumor definition methods, patient positioning devices and/or radiation delivery guiding tools. These techniques combine new imaging tools, which interface with the radiation delivery system through hardware or software, with state-of-the-art 3D CRT or IMRT, and allow physicians to optimize the accuracy and precision of the radiotherapy by adjusting the radiation beam based on the true position of the target tumor and critical organs. With IGRT, it is also possible to take tumor motion into account during RT planning and treatment.

Many IGRT solutions have been proposed to attack various aspects of the problem. Briefly, IGRT developments are focused in four major areas: (1) biological imaging tools for better definition of tumor volume; (2) time-resolved (4D) imaging techniques for modeling the intra-fraction organ motion; (3) on-board imaging systems or imaging devices registered to the treatment machines for inter-fraction patient localization; and (4) new radiation treatment planning and delivery schemes incorporating the information derived from the new imaging techniques. In this talk, recent developments of various available IGRT techniques will be highlighted, in particular image-guided respiration-gated RT and tracking of tumor motion. After hearing the talk, it is hoped that the audience will have an overall picture of IGRT, grasp the principles of currently available gating and tracking techniques, find it easier to navigate the vast literature on IGRT, and get a brief idea of how to implement the new IGRT techniques in their clinics.

Automatic microarray spot segmentation using a snake-Fisher model
Wen-Liang Hwang, Institute of Information Science, Academia Sinica, Taiwan

Inspired by Paragios and Deriche's work, which unifies boundary-based and region-based image partition approaches, we integrate the snake model and the Fisher criterion to capture, respectively, the boundary information and region information of microarray images. We then use the proposed algorithm to segment the spots in the microarray images, and compare our results with those obtained by commercial software. Our algorithm is automatic because its parameters are adaptively estimated from the data without human intervention.

A texture synthesis approach to Euler's elastica inpainting
Tony Chan, University of California, Los Angeles, USA & The National Science Foundation, USA

We present a new automatic technique for wire and scratch removal (inpainting) that works well in both textured and non-textured areas of an image. Chan, Kang, and Shen introduced a technique for inpainting using an Euler's elastica energy-based variational model that works well for repairing smooth areas of the image while maintaining edge detail. The technique is slow, however, due to the solution of a stiff fourth-order PDE. Efros and Leung's texture synthesis techniques can also be used for inpainting, and work well for areas of an image that contain repeating patterns. We have combined these two techniques to accelerate and constrain the solution of the fourth-order PDE. Instead of a stiff minimization, we have a combinatorial optimization problem that is much quicker to solve and more stable.

Joint work with Kangyu Ni (UCLA) and Doug Roble (Digital Domain)
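The patch-matching step at the heart of Efros and Leung's synthesis can be sketched as follows; this is a minimal single-pixel illustration under an assumed helper name (`fill_pixel`), not the combined elastica method of the talk:

```python
import numpy as np

def fill_pixel(img, known, y, x, half=1):
    """Efros-Leung-style exemplar matching: find the fully known patch whose
    overlap with the neighbourhood of (y, x) has the smallest sum of squared
    differences over known pixels, and copy its centre pixel."""
    h, w = img.shape
    target = img[y - half:y + half + 1, x - half:x + half + 1]
    tmask = known[y - half:y + half + 1, x - half:x + half + 1]
    best, best_val = np.inf, img[y, x]
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch_known = known[i - half:i + half + 1, j - half:j + half + 1]
            if not patch_known.all():          # candidate must be fully known
                continue
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            ssd = np.sum(tmask * (patch - target) ** 2)  # compare on known overlap
            if ssd < best:
                best, best_val = ssd, patch[half, half]
    return best_val
```

A full inpainting pass repeats this over the unknown region, filling pixels with the most known neighbours first.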

Introduction to mathematical imaging
Hui Ji, National University of Singapore, Singapore
Andy M. Yip, National University of Singapore, Singapore

In the first part of the course, we will introduce some mathematical methods for image denoising, deblurring, blind deconvolution and inpainting. The emphasis is on variational approaches. In the second part, we will introduce methods for super-resolution, image stitching and image-based rendering.

From splines and wavelets to mathematics of imaging
Charles Chui, University of Missouri, St. Louis, USA & Stanford University, USA

This is an introductory short course on the variational approach to the mathematics of imaging from the spline and wavelet points of view. The basic theory and methods of spline functions and wavelet analysis are discussed, followed by the formulation of certain PDE models for such applications as image de-noising and image inpainting. No prior knowledge of splines, wavelets, or PDEs is assumed.

Wavelets techniques in multifractal analysis of images: some theoretical results
Stéphane Jaffard, Université Paris XII, France

The purpose of multifractal analysis is to determine the fractal dimensions of the sets of singularities of a function or a measure. In applications, this is not directly feasible, and these dimensions are estimated with the help of a "multifractal formalism", which allows one to derive them from numerically computable quantities, the "scaling functions". We discuss the possible alternatives for defining such scaling functions, and the range of validity of the formulas thus obtained. We will focus on the particular case of 2D measures analyzed through wavelet techniques, since it is particularly relevant for applications in image processing.

Exemplar-based inpainting from a variational point of view
Simon Masnou, Université Pierre et Marie Curie, France

Inpainting refers to the restoration of missing or damaged parts in a digital image. First introduced in the context of texture synthesis, the methods based on sampling and copying valid patches appear to be very efficient. Until now, very few works have tried to explain these performances from a theoretical point of view, among which a noticeable contribution involving probabilistic tools that justifies the efficiency of the method for resynthesizing a texture. In a recent collaboration with Jean-François Aujol (Cachan, France) and Said Ladjal (Paris, France), we have developed a variational approach to explain the performances of the exemplar-based methods for reconstructing the geometry. This approach shows interesting connections with the theory of elasticity. My talk will focus on the justification of the model, its mathematical properties and some numerical consequences.

Homogeneous approximation property for continuous wavelet transforms
Wenchang Sun, Nankai University, China

The homogeneous approximation property (HAP) for frames is useful in practice and has been developed recently. In this talk, we consider the HAP for the continuous wavelet transform. We show that every pair of admissible wavelets has the HAP in the $L^2$ sense, while this is not true in general when pointwise convergence is considered. We give necessary and sufficient conditions for the pointwise HAP to hold, which depend on both the wavelets and the functions to be reconstructed.

Linear and nonlinear subdivision schemes in geometric modeling
Nira Dyn, Tel-Aviv University, Israel

Subdivision schemes are efficient computational methods for the design, representation and approximation of 2D and 3D curves, and of surfaces of arbitrary topology in 3D. Subdivision schemes generate curves/surfaces from
discrete data by repeated refinements. While these methods are simple to implement, their analysis is rather complicated.

The first part of the talk presents the "classical" case of linear subdivision schemes refining control points. It reviews mainly univariate schemes generating curves, their analysis, and their relation to the construction of wavelets. Several well-known schemes are discussed.
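A concrete example of a classical linear scheme refining control points is Chaikin's corner-cutting algorithm; the sketch below is illustrative and not necessarily one of the schemes discussed in the talk:

```python
import numpy as np

def chaikin(points, levels=1):
    """Chaikin's corner-cutting subdivision: each refinement replaces the
    edge P_i P_{i+1} by the two points 3/4 P_i + 1/4 P_{i+1} and
    1/4 P_i + 3/4 P_{i+1}. Repeated refinement converges to the C^1
    quadratic B-spline curve of the control polygon."""
    pts = np.asarray(points, dtype=float)
    for _ in range(levels):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]
        new = np.empty((2 * (len(pts) - 1), pts.shape[1]))
        new[0::2], new[1::2] = q, r            # interleave the two rules
        pts = new
    return pts
```

Each refinement doubles (minus two) the number of control points; the analysis of smoothness for such schemes is where the difficulty mentioned above lies.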

The second part of the talk presents three types of nonlinear subdivision schemes, which depend on the geometry of the data and are extensions of univariate linear schemes. The first two are schemes refining control points and generating curves. The last is a scheme refining curves in a geometry-dependent way and generating surfaces.

An MRA approach to continuous function extension with emphasis on image inpainting
Charles Chui, University of Missouri, St. Louis, USA & Stanford University, USA

We will introduce a multi-resolution approximation (MRA) approach to the study of continuous function extension with applications to image inpainting and surface completion. Motivated by the anisotropic diffusion PDE model, we introduce the notions of data propagation and data extension. Together with the diffusion operators with Green’s functions as heat kernels, in the sense of Coifman and Lafon, we formulate the extension operators and derive their corresponding error formulas, from which sharp error bounds can be easily formulated. For the isotropic setting, our consideration is an extension of the smooth inpainting result of Tony Chan and Jackie Shen, in that, analogous to Taylor’s polynomial expansion, a series expansion formula is derived with error term in the form of some “Peano integral”.

Dimensionality reduction of hyper-spectral image data
Charles Chui, University of Missouri, St. Louis, USA & Stanford University, USA

This is a joint work with Jianzhong Wang. My presentation is concerned with the problem of dimensionality reduction of hyperspectral image data of complex geospatial geometric structures. The image cubes under consideration are generally quite large, with over 100 bands and of at least 100,000-pixel resolution. It is well known that linear methods, such as principal component analysis (PCA) and multi-dimensional scaling (MDS), are not effective for the study of this problem, and current non-linear methods encounter various difficulties, particularly in neighborhood selection and data set tiling. Our approach to this problem is based on diffusion maps and diffusion wavelets. An important advantage of this approach is that the diffusion process can be easily applied to control the neighborhood size. In order to facilitate such diffusion processes, we will discuss certain neighborhood selection rules to address the choice of suitable neighbors and introduce a landmark technique to significantly reduce the diffusion kernel size for the need of memory saving and computational stability.

Bootstrap for empirical multifractal analysis
Patrice Abry, École normale supérieure de Lyon, France

Multifractal analysis is becoming a standard statistical signal processing tool, available in most up-to-date toolboxes. In practice, it mostly consists in measuring scaling exponents and power-law attributes from data. Classically, practical multifractal analysis, i.e., the multifractal formalism, is based on wavelet coefficients. However, it has recently been shown that there are substantial theoretical benefits to basing multifractal analysis on different multiresolution quantities, referred to as wavelet leaders. It will be shown that wavelet-leader-based multifractal parameter estimation benefits from substantially improved statistical performance compared to estimation based on wavelet coefficients. An important practical limitation of multifractal analysis is that little can be derived theoretically about the statistical performance of the estimation procedures or about hypothesis tests.

We devise here a wavelet-domain bootstrap procedure and illustrate that it enables the operational derivation of accurate confidence intervals for the estimates of multifractal attributes. Also, the bootstrap procedure provides us with hypothesis test procedures enabling one, for instance, to practically distinguish between (finite-variance) Gaussian self-similar processes and multiplicative (Mandelbrot-type) cascades. The wavelet-leader-based multifractal analysis procedure will be illustrated on both (1D) signals and (2D) images.
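The percentile-bootstrap idea behind such confidence intervals can be sketched generically; the toy statistic and all names below are illustrative assumptions, not the authors' wavelet-leader procedure:

```python
import numpy as np

def bootstrap_ci(x, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample x with replacement, recompute the
    statistic, and return the empirical (alpha/2, 1-alpha/2) quantiles."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# toy use: a CI for the mean log2-magnitude of 'wavelet-like' coefficients,
# a crude stand-in for a scale-wise scaling-exponent estimate
rng = np.random.default_rng(1)
coeffs = rng.standard_normal(256)
lo, hi = bootstrap_ci(np.log2(np.abs(coeffs) + 1e-12))
```

The wavelet-domain version resamples within each scale so that the dependence structure across scales is respected.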

Pseudo box splines
Song Li, Zhejiang University, China

We present a new family of refinable functions, named pseudo box splines, which generalizes univariate pseudo splines to the multivariate setting. A complete analysis of pseudo box splines, including stability, regularity, and tight wavelet frames with desired approximation order, is given.

Visual quality evaluation - perceptual approach
Ee Ping Ong, Institute for Infocomm Research (I2R), Singapore

This talk will introduce the perceptual approach to visual quality evaluation, as opposed to objective PSNR-based methods. Then, a perceptual visual quality metric based on the characteristics of the human visual system will be introduced. This metric belongs to the class of full-reference approaches, and it will be shown that it performs better than PSNR on multimedia videos.
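For reference, the objective PSNR baseline the metric is compared against is simply:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

PSNR depends only on pixel-wise error, which is exactly the limitation a perceptual metric tries to overcome.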

Some topological and geometric properties of refinable functions and MRA affine frames
Wai Shing Tang, National University of Singapore, Singapore

We investigate some topological and geometric properties of the set ${\mathcal R}$ of all refinable functions in $L^{2}(R^d)$, and of the set of all MRA affine frames. In particular, ${\mathcal R}$ is nowhere dense in $L^{2}(R^d)$; the unit sphere of ${\mathcal R}$ is path-connected in the $L^2$-norm; and for any $M$-dimensional hyperplane generated by $L^{2}$-functions $f_{0}, \ldots, f_{M}$, either almost all the functions in the hyperplane are refinable or almost all of them are not. Also, the set of all MRA affine frames is nowhere dense in $L^{2}(R^d)$, contrary to the common belief that most affine frames are MRA frames. We also discuss a new characterization of the $L^{2}$-closure $\overline{\mathcal{R}}$ of ${\mathcal R}$, and extend the above topological and geometric results from ${\mathcal R}$ to $\overline{\mathcal R}$, and further to the set of all refinable vectors and its $L^2$-closure.

Construction of vector valued wavelets
Eric Weber, Iowa State University, USA

Wavelets have demonstrated great power in a wide range of data processing applications; the most common application of wavelets is the processing of images. Color images, as well as hyperspectral images, have several color bands, which can naturally be thought of as vector valued data. Therefore, to match the vector valued data of images, we consider vector valued wavelets.

We present a construction of vector valued wavelets, as well as associated vector valued filter banks. Our construction is based on orthogonal frames and the extension principles of Ron and Shen. We also will show that our construction is actually the only way to construct vector valued wavelets within an MRA setting. This is joint work with Brody Johnson.

Graph cuts for the multiphase Mumford-Shah model using piecewise constant level set methods
Xuecheng Tai, Nanyang Technological University, Singapore

The piecewise constant level set method has previously been used successfully for the multiphase Mumford-Shah model. The resulting minimization problem can be solved by continuous optimization techniques such as the augmented Lagrangian method. In this work, we instead propose an integer optimization technique to solve the multiphase Mumford-Shah functional represented by piecewise constant level set functions. This approach, which is based on a cut on an appropriate graph, is far more efficient than the previous methods. Numerical experiments show that the new method produces results of the same quality.

This talk is based on a joint work with Egil Bae.

Characterization and construction of monocomponent signals
Lihua Yang, Sun Yat-Sen University, China

This talk will discuss the characterization of monocomponent signals based on the $H^p$ decomposition. A class of periodic analytic signals with positive instantaneous frequency is constructed.

Extension principle for tight wavelet frames of periodic functions
Kok Ming Teo, National Institute of Education, Singapore

In this talk, we present a unitary extension principle for constructing normalized tight wavelet frames of periodic functions of one or higher dimensions. While the wavelets are nonstationary, the method much simplifies their construction by reducing it to a matrix extension problem that involves finite rows of complex numbers. With a constructive proof, necessary and sufficient conditions for a solution of the matrix extension problem are obtained. A complete characterization of all possible solutions is also provided. As an illustration, a parametric family of trigonometric polynomial tight wavelet frames is constructed. This is joint work with Say Song Goh.

Dual pairs of Gabor frames
Ole Christensen, Technical University of Denmark, Denmark

Associated to a compactly supported function g whose integer translates form a partition of unity, we construct pairs of dual Gabor frames with good time-frequency properties. The construction actually yields a class of dual frames associated with the given frame generator, and it is possible to optimize the dual with respect to, e.g., minimal support, symmetry, and other constraints. For frames generated by compactly supported polynomials, it has recently been shown that no compactly supported polynomial dual generator exists. Imposing an extra condition, we show that one can use a certain B-spline as dual generator.

Simultaneously inpainting in image and transformed domains
Lixin Shen, Syracuse University, USA

In this talk, we focus on the restoration of images that have incomplete data in either the image domain or the transformed domain or in both. We propose an iterative algorithm that can restore the incomplete data in both domains simultaneously. We prove the convergence of the algorithm and derive the optimal properties of its limit. Applications of the algorithm will be demonstrated.

A hybrid of DCT and Haar wavelet for image denoising
Lixin Shen, Syracuse University, USA

The discrete cosine transform (DCT) is widely used in image/signal processing due to its asymptotic equivalence to the Karhunen-Loeve transform for Markov-1 signals. The Haar wavelet transform for images/signals gives a sparse representation of local features (edges). In this talk, we will discuss the possibility of integrating the DCT and the Haar wavelet in image denoising.
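The DCT half of such a hybrid can be sketched as hard thresholding in an orthonormal DCT basis (the Haar half is analogous); the function names and the threshold below are illustrative assumptions, not the speakers' method:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C[k, m] = s_k * cos(pi * (m + 0.5) * k / n)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)                      # first row scaled for orthonormality
    return C

def denoise_1d(x, thresh):
    """Hard-threshold the DCT coefficients of a 1-D signal and invert."""
    C = dct_matrix(len(x))
    c = C @ x
    c[np.abs(c) < thresh] = 0.0               # kill small (noise-dominated) coefficients
    return C.T @ c                            # inverse of an orthonormal transform
```

A hybrid would apply the Haar transform near detected edges and the DCT elsewhere, exploiting the complementary sparsity of the two bases.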

Uncertainty principles of concentration type and signal recovery
Say Song Goh, National University of Singapore, Singapore

The uncertainty principle of Donoho and Stark provides an inequality involving sets of concentration of a function and its Fourier transform. In this talk, we shall present a very general inequality of concentration type for operators on Hilbert spaces. Consequences of this inequality include the uncertainty principle of Donoho and Stark as well as uncertainty principles for Bessel sequences in Hilbert spaces and for integral operators between measure spaces. We also generalize to the setting of Hilbert spaces a related result of Donoho and Stark on stable recovery of truncated signals. The above is based on joint work with Tim N. T. Goodman. In addition, we will mention further studies on related concentration type inequalities and corresponding signal recovery results, which are applicable to the short-time Fourier transform, the continuous wavelet transform, and discrete wavelet coefficients.
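For reference, the Donoho-Stark inequality alluded to can be stated as follows, with ε-concentration meaning that the $L^2$-mass outside the set is at most ε:

```latex
% Donoho--Stark (for unit-norm f in L^2(R)):
% if \|f - f\,\chi_T\|_2 \le \varepsilon_T and
%    \|\hat f - \hat f\,\chi_\Omega\|_2 \le \varepsilon_\Omega, then
\[
  |T|\,|\Omega| \;\ge\; \bigl(1 - \varepsilon_T - \varepsilon_\Omega\bigr)^2 .
\]
```

The abstract generalization replaces the time- and band-limiting projections by pairs of operators on a Hilbert space.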

Limitations on motion processing for real-time systems: insights from visual illusions
Cornelia Fermüller, University of Maryland, USA

The visual stimulus used as the basis for motion interpretation is the 2D image motion, which is derived from the change of image patterns over time. Image motion is then processed to solve many tasks, such as controlling eye movements, tracking, segmenting the scene into surfaces, estimating 3D motion, and scene reconstruction. It is well known that computing accurate image motion is challenging; this is documented in the computational literature, which has proposed a plethora of algorithms for estimating it. The estimation of image motion is locally under-constrained. Thus, smoothness assumptions have to be imposed to estimate it, but at the same time the locations where the scene is discontinuous have to be detected.

It is less known, however, that even at locations where the scene is smooth, the estimation of image motion poses a problem. We will demonstrate two problems, which are inherent to image motion signals and thus affect biological as well as artificial motion systems. One concerns the effect of noise in the image measurements, which seriously affects the estimation: it causes statistical bias. In other words, because it is not possible to accurately estimate the statistics of the noise, the estimated value of image motion has systematic error. The other is that, because real-time systems cannot use data from the future, they have to use asymmetric (causal) filters to process the change of the image signal over time. Such filters have problems with signals which are spatially asymmetric. The uncertainty in localizing signals in space and frequency causes image motion to be estimated incorrectly at some frequencies.

We hypothesize that the above two problems are the main cause for illusory motion experienced in a number of recently discovered patterns. The most popularized of these patterns are the ‘Ouchi illusion’ and the ‘Snake illusion’. Psychophysical experiments with variations of these illusory patterns demonstrate that the proposed models predict the illusory perception very well.

Joint work with Ji Hui, National University of Singapore.

Theory and algorithms for anisotropic triangulations
Albert Cohen, Université Pierre et Marie Curie, France

Joint work with : Nira Dyn, Frederic Hecht and Jean-Marie Mirebeau

We present the first results of an ongoing project revolving around approximation by finite element functions on
adaptive and anisotropic triangulations. We first recall the available theory for isotropic triangulations which involves Besov spaces.

For anisotropic triangulations, we present an analytic criterion that governs the rate of convergence in $L^p$ norms for optimally built triangulations. We propose a greedy algorithm which has the ability to generate triangulations that exhibit a locally optimal aspect ratio and prove that the optimal convergence rate is met by the algorithm. We also present applications to image representation and compression.

Multi-fractal signature for texture analysis
Hui Ji, National University of Singapore, Singapore

As a simple but powerful statistical measurement, the histogram has been used for recognition tasks in a wide range of applications, including texture classification and retrieval. In this talk, we present how the multifractal spectrum can be applied to texture analysis as a better "histogram", with better invariance to changes in the external environment. By combining global spatial invariance and local robust measurement on textures, the multifractal spectrum provides an extremely compact but powerful texture signature that captures the essential structure of textures. We will also discuss our psychophysical study of fractal-like statistical inference in the human visual system.

Matrix extension and the construction of wavelets
Xiaosheng Zhuang, University of Alberta, Canada

Wavelet analysis has been widely used in a broad range of scientific areas, such as signal denoising, image processing, computer graphics, and numerical algorithms. In this talk, I will mainly focus on two questions: 'What are wavelets?' and 'How to construct wavelets?'. My talk includes two parts. The first part is entirely introductory, in which I will try to give a rough idea of wavelets and their applications. The second part is almost entirely linear-algebraic, and is related to the so-called 'unitary extension principle (UEP)', which is used to extend a given vector with Laurent polynomial entries to a matrix.

Application of Wavelet-based signal threshold denoising to quantitative proteomics
Yuanyuan Chen, Zhejiang University, China

The ASAPRatio program in proteomics evaluates protein abundance ratios and their associated errors, but background noise may not be removed completely by its Savitzky-Golay smoothing filter. We propose a wavelet-based signal threshold de-noising method to replace the Savitzky-Golay smoothing filter. Our comparative experimental results demonstrate that the method can remove noise embedded in MS data and thereby improve the ASAPRatio program.
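A minimal sketch of wavelet threshold de-noising of this kind is a one-level orthonormal Haar transform with soft thresholding of the detail coefficients; the multi-level transform and threshold choice of the actual method are not reproduced, and the function name is an assumption:

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level orthonormal Haar transform, soft-threshold the detail
    coefficients, then invert (x must have even length)."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)          # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With `thresh = 0` the transform is inverted exactly; a positive threshold suppresses small, noise-dominated details while keeping large ones.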

Iterative algorithms based on the decouple of deblurring and denoising for image restoration
Youwei Wen, National University of Singapore, Singapore

In this talk, we propose iterative algorithms for solving image restoration problems. The iterative algorithms are based on decoupling the deblurring and denoising steps in the restoration process. In the deblurring step, an efficient deblurring method using fast transforms can be employed. In the denoising step, effective methods such as wavelet shrinkage or total variation denoising can be used. The main advantage of this proposal is that the resulting algorithms can be very efficient, and can produce restored images of better visual quality and signal-to-noise ratio than restoration methods that combine a data-fitting term and a regularization term together. The convergence of the proposed algorithms is shown in the paper. Numerical examples are also given to demonstrate the effectiveness of these algorithms.
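One decoupled pass of such an algorithm can be sketched with an FFT-based regularized inverse filter for the deblurring step and a simple shrinkage surrogate for the denoising step; the regularization weight, shrinkage rule, and helper names are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def deblur_fft(g, kernel, mu=1e-3):
    """Deblurring step: Tikhonov-regularized inverse filter, computed with
    the FFT (fast because a periodic blur diagonalizes in Fourier space)."""
    K = np.fft.fft2(kernel, s=g.shape)        # kernel transfer function
    G = np.fft.fft2(g)
    F = np.conj(K) * G / (np.abs(K) ** 2 + mu)
    return np.real(np.fft.ifft2(F))

def shrink(u, t):
    """Denoising step: pointwise soft shrinkage, a crude stand-in for the
    wavelet-shrinkage or TV denoisers mentioned in the abstract."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
```

The iterative algorithms alternate the two steps, feeding the denoised estimate back into the deblurring step.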

Variational PDE techniques in wavelet transforms and image processing
Haomin Zhou, Georgia Institute of Technology, USA

(Joint work with Prof. Tony Chan at UCLA)

It is well known that standard wavelet linear approximations (truncating high frequency coefficients) generate oscillations (Gibbs' phenomenon) near singularities in piecewise smooth functions. Nonlinear and data dependent methods are often used to overcome this problem. In the past decade, a new research trend has emerged, which introduces partial differential equation (PDE) and variational techniques (including techniques developed in computational fluid dynamics (CFD)) into wavelet transforms for the same purpose.

In this talk, I will present a brief overview of our work along this direction. Two different approaches have been used. One is to use PDE ideas to directly change wavelet transform algorithms so as to generate wavelet coefficients which can avoid oscillations in reconstructions when the high frequency coefficients are truncated. The other is to stay with standard wavelet transforms and use variational PDE techniques to modify the coefficients in the truncation process so that the oscillations are reduced in the reconstruction processes.

The first part will be on an adaptive ENO wavelet transform designed by using ideas from Essentially Non-Oscillatory (ENO) schemes for numerical shock capturing.
ENO-wavelet transforms retain the essential properties and advantages of standard wavelet transforms, such as concentrating energy in the low frequencies, obtaining arbitrarily high order accuracy uniformly, and having a multiresolution framework and fast algorithms, all without edge artifacts. We have also shown the stability of the ENO-wavelet transform and obtained a rigorous approximation error bound, which shows that the error in the ENO-wavelet approximation depends only on the size of the derivative of the function away from the discontinuities. The second part of the talk is on using a variational framework, in particular the minimization of total variation (TV), to select and modify the retained standard wavelet coefficients so that the reconstructed images have fewer oscillations near edges. Applications to image compression, denoising, and wavelet inpainting will be mentioned.

PDE models and Wavelet inpainting
Haomin Zhou, Georgia Institute of Technology, USA

In this talk, I will present variational models for the wavelet inpainting problem, which aims to fill in missing or damaged wavelet coefficients in image reconstruction. The problem is motivated by error concealment in image processing and communications, and it is closely related to classical image inpainting, with the difference that the inpainting regions are in the wavelet domain. This brings new challenges to the reconstruction. The new variational models, especially total variation minimization in conjunction with wavelets, lead to PDEs in the wavelet domain that can be solved numerically. The proposed models have effective and automatic control over geometric features of the inpainted images, including sharp edges, even in the presence of substantial loss of wavelet coefficients, including in the low frequencies. This is joint work with Tony Chan (UCLA) and Jackie Shen (Barclays).

Unitary extension principle: theory and applications
Zuowei Shen, National University of Singapore, Singapore

The unitary extension principle provides great flexibility in designing tight frame wavelet filters, making the construction of tight frame wavelets painless. In this talk, I will start by introducing the unitary extension principle, followed by a brief review of some new theoretical developments in the field based on or motivated by this principle.
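For filters on the line with dilation 2, the principle reduces to two algebraic conditions on the filter symbols. As a concrete illustration (using the standard piecewise-linear B-spline framelet masks, which are well known but not stated in this abstract), the conditions can be checked numerically:

```python
import numpy as np

# Piecewise-linear B-spline framelet masks (Ron-Shen construction)
h0 = np.array([1, 2, 1]) / 4
h1 = np.array([1, 0, -1]) * np.sqrt(2) / 4
h2 = np.array([-1, 2, -1]) / 4

def symbol(h, xi):
    """Trigonometric-polynomial symbol: sum_k h[k] exp(-i k xi)."""
    k = np.arange(len(h))
    return (h[:, None] * np.exp(-1j * np.outer(k, xi))).sum(axis=0)

xi = np.linspace(0, 2 * np.pi, 257)
s = [symbol(h, xi) for h in (h0, h1, h2)]
t = [symbol(h, xi + np.pi) for h in (h0, h1, h2)]

# UEP conditions: sum_i |h_i(xi)|^2 = 1  and  sum_i h_i(xi) conj(h_i(xi + pi)) = 0
cond1 = sum(a * a.conj() for a in s)
cond2 = sum(a * b.conj() for a, b in zip(s, t))
```

Both conditions hold to machine precision on the whole frequency grid, confirming that these three masks generate a tight wavelet frame.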

In many practical problems in image processing, the observed data sets are often incomplete in the sense that features of interest in the image are partially missing or corrupted by noise. The recovery of missing data from incomplete data is an essential part of any image processing procedure, whether the final image is utilized for visual interpretation or for automatic analysis.

In the second part of this talk, I will discuss our new iterative algorithm for image recovery from missing data, which is based on tight framelet systems constructed by the unitary extension principle. We consider in particular a few main applications in image processing: inpainting, impulse noise removal, super-resolution image reconstruction and compressed sensing.
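A toy version of such an iteration, with an orthonormal Haar transform standing in for the redundant framelet systems of the talk, alternates wavelet-domain soft thresholding (denoising) with reinsertion of the observed samples:

```python
import numpy as np

def haar(x):
    return np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]]) / np.sqrt(2)

def ihaar(c):
    n = len(c) // 2
    x = np.empty(2 * n)
    x[0::2] = (c[:n] + c[n:]) / np.sqrt(2)
    x[1::2] = (c[:n] - c[n:]) / np.sqrt(2)
    return x

def recover(g, mask, lam=0.05, n_iter=100):
    """Iterative recovery of missing samples: keep observed data, fill
    the gaps with a wavelet-thresholded (denoised) version of the
    current estimate. mask[i] is True where g[i] was observed."""
    f = np.where(mask, g, g[mask].mean())     # crude initial fill
    for _ in range(n_iter):
        c = haar(f)
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold
        f = np.where(mask, g, ihaar(c))
    return f
```

With a redundant framelet system the thresholding step becomes genuinely smoothing; the orthonormal Haar stand-in here only conveys the structure of the iteration.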

Deconvolution: a Wavelet Frame approach
Zuowei Shen, National University of Singapore, Singapore

This talk is devoted to deconvolution algorithms based on the wavelet frame approach. I start by introducing algorithms used in high-resolution image reconstruction. Then, a complete formulation of deconvolution in terms of multiresolution analysis is given. This formulation converts the deconvolution process into the filling in of missing wavelet frame coefficients. The missing wavelet frame coefficients are recovered iteratively together with a built-in denoising scheme that removes noise in the data set, so that the noise does not blow up during the iteration. This approach has already been used efficiently to solve various problems in high-resolution image reconstruction. Frame-based deconvolution using a newly developed compressed sensing algorithm is also discussed.
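The flavor of such an iteration can be caricatured as a thresholded Landweber scheme: a data-fidelity gradient step followed by wavelet-domain soft thresholding, which plays the role of the built-in denoiser. This is a sketch under simplifying assumptions (1D signal, periodic symmetric blur, orthonormal Haar transform instead of a frame), not the talk's algorithm.

```python
import numpy as np

def haar(x):
    return np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]]) / np.sqrt(2)

def ihaar(c):
    n = len(c) // 2
    x = np.empty(2 * n)
    x[0::2] = (c[:n] + c[n:]) / np.sqrt(2)
    x[1::2] = (c[:n] - c[n:]) / np.sqrt(2)
    return x

def deconvolve(g, khat, lam=0.01, n_iter=500):
    """Thresholded Landweber: f <- soft(f + K(g - K f)), where K is a
    symmetric (self-adjoint) periodic blur given by its Fourier
    multiplier khat with 0 <= khat <= 1. Thresholding keeps the noise
    from blowing up as the iteration inverts the blur."""
    blur = lambda f: np.real(np.fft.ifft(khat * np.fft.fft(f)))
    f = g.copy()
    for _ in range(n_iter):
        f = f + blur(g - blur(f))                    # Landweber step
        c = haar(f)
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        f = ihaar(c)
    return f
```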

Wavelets, discriminant analysis and regularization for image analysis
Dao-Qing Dai, Sun Yat-Sen University, China

We shall present techniques based on wavelets and regularization for two problems in image analysis: face recognition and super-resolution image restoration. Our recent results will be introduced.

Quasi-projection operators in Besov spaces: approximation, decomposition, and applications
Rong-Qing Jia, University of Alberta, Canada

This talk is concerned with quasi-projection operators in Besov spaces. We give sharp estimates for approximation by quasi-projection operators in Besov spaces. We also investigate multi-level decompositions induced by quasi-projection operators. Finally, various applications of quasi-projection operators are discussed. In particular, we demonstrate applications of quasi-projection operators to the multigrid method and the wavelet method in the study of numerical solutions of partial differential equations.

Dimensionality reduction of hyper-spectral image data
Charles Chui, University of Missouri, St. Louis, USA & Stanford University, USA

This is joint work with Jianzhong Wang. My presentation is concerned with the problem of dimensionality reduction of hyperspectral image data of complex geospatial geometric structures. The image cubes under consideration are generally quite large, with over 100 bands and at least 100,000-pixel resolution. It is well known that linear methods, such as principal component analysis (PCA) and multi-dimensional scaling (MDS), are not effective for this problem, and current non-linear methods encounter various difficulties, particularly in neighborhood selection and data set tiling. Our approach is based on diffusion maps and diffusion wavelets. An important advantage of this approach is that the diffusion process can be easily applied to control the neighborhood size. In order to facilitate such diffusion processes, we will discuss certain neighborhood selection rules to address the choice of suitable neighbors and introduce a landmark technique to significantly reduce the diffusion kernel size, both to save memory and to improve computational stability.
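The basic diffusion-map embedding underlying this approach can be sketched in a few lines; the neighborhood rules and landmarking of the talk are omitted, and the parameter names are illustrative.

```python
import numpy as np

def diffusion_map(X, eps, t=1, k=2):
    """Basic diffusion map: Gaussian kernel, row-normalize to a Markov
    matrix, embed with the top non-trivial eigenvectors scaled by
    eigenvalue**t (diffusion time t)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                  # Markov matrix
    w, v = np.linalg.eig(P)                               # near-real spectrum
    idx = np.argsort(-w.real)
    w, v = w.real[idx], v.real[:, idx]
    # skip the trivial eigenvalue 1 with its constant eigenvector
    return v[:, 1:k + 1] * (w[1:k + 1] ** t)
```

On well-separated clusters the first diffusion coordinate splits the data by sign, which is the simplest instance of the geometry these embeddings reveal.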

Sampling signals with finite rate of innovation: theory and applications
Pina Marziliano, Nanyang Technological University, Singapore

Joint work with Martin Vetterli, Thierry Blu, Pier-Luigi Dragotti

The key issue in all digital systems is the applicability of a sampling and reconstruction scenario. The main questions are: can the signal be sampled, what is the minimum sampling rate, how easy is it to reconstruct the signal, and what is the computational complexity? In this talk, a sampling theory will be presented for a class of signals that have a finite number of degrees of freedom per unit of time, which is defined as the rate of innovation. Examples of such signals are streams of Dirac pulses (e.g. a Poisson process), bilevel signals and piecewise polynomials; these are not band-limited, so the classical (Nyquist-Kotelnikov-Whittaker-Shannon) sampling theorem is not applicable. It will be shown that these signals can be sampled uniformly at (or above) the rate of innovation using an appropriate sampling kernel and then perfectly reconstructed by algebraic methods.

Much research has evolved from this framework and has led others to further develop a sampling theory for FRI signals with noise, multidimensional FRI signals, and different sampling kernels such as Gaussians, splines and wavelets. The diverse applications investigated so far include: channel and timing estimation for UWB systems, modeling and compression of ECG and EEG signals, high-resolution deconvolution in optical coherence tomography, and super-resolution of images.
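The algebraic reconstruction alluded to above is, in the classical noiseless case, the annihilating filter method. The sketch below (my own minimal illustration) recovers the locations of K Diracs from 2K+1 consecutive Fourier coefficients:

```python
import numpy as np

def fri_locations(X, K, tau=1.0):
    """Annihilating-filter sketch: recover K Dirac locations t_k from
    consecutive Fourier coefficients
        X[m] = sum_k a_k * exp(-2j*pi*m*t_k/tau),  m = 0, ..., len(X)-1.
    The filter h (length K+1) annihilates X; its roots encode the t_k."""
    M = len(X)
    # Toeplitz system A h = 0
    A = np.array([[X[i + K - j] for j in range(K + 1)] for i in range(M - K)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                  # null vector = annihilating filter
    roots = np.roots(h)                # roots are exp(-2j*pi*t_k/tau)
    t = np.mod(-np.angle(roots) * tau / (2 * np.pi), tau)
    return np.sort(t)
```

Given the locations, the amplitudes a_k follow from a linear (Vandermonde) system, completing the perfect reconstruction.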

A study of an interactive image segmentation model and a variant of the Chan-Vese image segmentation model for overlapping objects with additive intensity
Andy M. Yip, National University of Singapore, Singapore

In the first part of the talk, I will present some theoretical and numerical studies of an interactive image segmentation model proposed by Guan and Qiu. We show that the bilateral constraints used by Guan and Qiu are automatically satisfied by the solution of the corresponding unconstrained problem. We demonstrate numerically that domain decomposition preconditioners are quite effective for solving the optimality condition. In the second part of the talk, I will present a variant of the Chan-Vese image segmentation model which is capable of segmenting overlapping objects whose intensity at the intersection is approximately given by the sum of the intensity levels of the two objects. Applications of such a model include X-ray and multichannel microscopy images.

Harmonic and multiscale analysis of and on data sets in high-dimensions
Mauro Maggioni, Duke University, USA

In many applications one is faced with the task of analyzing large amounts of data, typically embedded in high-dimensional space, but with a lower effective dimensionality, due to physical or statistical constraints. We are interested in studying the geometry of such data sets, modeled as noisy manifolds or graphs, in particular in estimating their intrinsic dimensionality and finding intrinsic coordinate systems on the data. We discuss recent results in these directions, where eigenfunctions of a Laplacian on the data or the associated heat kernel can be used to introduce coordinates with provable guarantees on their bi-Lipschitz distortion. We also discuss ways of studying, fitting, denoising and regularizing functions defined on the data, by using Fourier or a wavelet-like multiscale analysis on the data. We present applications to nonlinear image denoising, semisupervised learning on a family of benchmark datasets, and, if time permits, to Markov decision processes.
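The simplest instance of Laplacian-based regularization of a function on data is Tikhonov smoothing on a weighted graph. This sketch (illustrative, not the talk's multiscale constructions) solves min_f ||f - y||^2 + lam * f'Lf in closed form:

```python
import numpy as np

def graph_denoise(W, y, lam=1.0):
    """Tikhonov denoising of a function y on a weighted graph with
    adjacency matrix W: minimize ||f - y||^2 + lam * f^T L f, where
    L = D - W is the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)
```

Because the constant vector is in the null space of L, the smoother preserves the mean of y while damping oscillations along graph edges.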

Multifractal analysis techniques for image classification
Stéphane Jaffard, Université Paris XII, France

Multifractal analysis yields new classification tools in signal and image processing, which are of non-parametric type. They can be interpreted as the determination of regularity indices of the image in new scales of function spaces. They are based on the properties of the wavelet decomposition of measures and functions. The statistical method used is a non-parametric one: the bootstrap. We will discuss the mathematical foundations of this theory and show applications to natural images.
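A crude sketch of the wavelet side of such an analysis (structure functions only; the wavelet-leader refinements and bootstrap confidence intervals of the talk are omitted): estimate scaling exponents zeta(q) as the slope of log2 of the q-th moment of wavelet coefficients versus scale.

```python
import numpy as np

def scaling_exponents(x, qs, n_scales=6):
    """Estimate wavelet scaling exponents zeta(q) as the slope of
    log2 E|d_j|^q versus scale j, using Haar detail coefficients."""
    x = np.asarray(x, float)
    logs = []
    for _ in range(n_scales):
        d = (x[0::2] - x[1::2]) / np.sqrt(2)       # Haar details at this scale
        logs.append([np.log2(np.mean(np.abs(d) ** q)) for q in qs])
        x = (x[0::2] + x[1::2]) / np.sqrt(2)       # coarsen and repeat
    logs = np.array(logs)
    j = np.arange(n_scales)
    return np.array([np.polyfit(j, logs[:, i], 1)[0] for i in range(len(qs))])
```

For a monofractal signal zeta(q) is linear in q (for Brownian motion, roughly q(H + 1/2) with H = 1/2); curvature of zeta(q) is the signature of multifractality used for classification.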

Polyphase geometry
Wayne M. Lawton, National University of Singapore, Singapore

We show how polyphase representations in multirate signal processing, wavelet filter design, and computational geometry employ vector and matrix valued “functions” (usually having trigonometric polynomial entries) on a circle or torus. Then we explain why it is necessary to replace “functions” by sections of appropriate vector bundles in order to capture the polyphase geometry. We use this geometry to explain the multiscale (Cantor spectrum) property of almost periodic Schrödinger operators and summarize preliminary work on open problems in quantum chaos.

Scattered data reconstruction by regularization in B-spline and associated wavelet spaces
Yuhong Xu, National University of Singapore, Singapore

Curve/surface reconstruction from scattered noisy data has a wide range of applications. Under the regularized least squares framework, we present a reconstruction method that uses the approximation space spanned by dilated shifts of a cardinal B-spline function. We provide an error analysis for the method and then give an efficient algorithm by transferring the computation from the B-spline domain to the wavelet domain. This method is computationally more favorable than the classical thin-plate spline method when dealing with large data sets. We also discuss edge-preserving reconstruction, which is attained by using L1-based regularization.
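The regularized least squares step can be sketched directly in the B-spline domain (the wavelet-domain acceleration of the talk is not shown; function and parameter names are illustrative, and linear B-splines stand in for general cardinal B-splines):

```python
import numpy as np

def fit_spline(t, y, n_knots=16, lam=1e-2):
    """Regularized least squares with linear B-splines (hat functions)
    on [0, 1]: minimize ||A c - y||^2 + lam ||D c||^2, with D a
    second-difference roughness penalty on the coefficients."""
    knots = np.linspace(0, 1, n_knots)
    h = knots[1] - knots[0]
    # design matrix: hat function centered at each knot, evaluated at t
    A = np.maximum(1 - np.abs((t[:, None] - knots[None, :]) / h), 0.0)
    D = np.diff(np.eye(n_knots), 2, axis=0)
    c = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
    return knots, c
```

For hat functions the fitted curve passes through the coefficient values at the knots, so the coefficients themselves are a denoised sampling of the underlying curve.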

Tight frames from pseudo-splines
Yi Shen, Zhejiang University, China

Pseudo-splines of type I and II provide a rich family of refinable functions, with B-splines, interpolatory refinable functions and refinable functions with orthonormal shifts as special examples. In this talk, we give an analysis of some important properties of tight frames generated from pseudo-splines via the unitary extension principle.

Vector refinement equations with infinitely supported masks
Jianbin Yang, Zhejiang University, China

In this talk, we investigate the L2-solutions of vector refinement equations with exponentially decaying masks and a general dilation matrix. A necessary and sufficient condition for the convergence of the corresponding vector cascade algorithm in L2 is given. As an application, we also characterize biorthogonal multiple refinable functions.
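The cascade algorithm referred to here can be illustrated in its simplest scalar, finitely supported form (a sketch for intuition only; the talk concerns vector equations with exponentially decaying masks):

```python
import numpy as np

def cascade(mask, n_iter=8):
    """Scalar cascade algorithm: iterate v <- (upsample v) * mask.
    For the hat-function mask [1/2, 1, 1/2], v samples the refinable
    function at the interior dyadic points of spacing 2**(-n_iter)."""
    v = np.array([1.0])
    for _ in range(n_iter):
        up = np.zeros(2 * len(v) - 1)
        up[::2] = v                      # upsample by 2
        v = np.convolve(up, mask)        # convolve with the refinement mask
    return v
```

Convergence of this iteration (in L2, under conditions on the mask symbol) is exactly the property characterized in the vector setting of the talk.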

Restoration of images with rotated shapes
Simon Setzer, University of Mannheim, Germany

Image restoration methods which respect important features such as edges play a fundamental role in digital image processing. However, in the presence of strong noise, many edge-preserving methods tend to round off corners. To avoid this for images which contain rotated (linearly transformed) rectangular shapes, we present novel variational and diffusion techniques.
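For context, the classical edge-preserving diffusion whose corner-rounding the talk addresses can be sketched in 1D (standard Perona-Malik, not the authors' shape-adapted variants; parameter names are illustrative):

```python
import numpy as np

def perona_malik(u, n_iter=20, dt=0.2, kappa=0.1):
    """Classical Perona-Malik diffusion (1D, explicit scheme): diffuse
    weakly across large gradients (edges) and strongly in flat regions.
    Stable for dt <= 0.5 since the edge-stopping function is <= 1."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(u)                          # forward differences
        g = 1.0 / (1.0 + (d / kappa) ** 2)      # edge-stopping function
        flux = g * d
        u[1:-1] += dt * (flux[1:] - flux[:-1])
    return u
```

In 2D the same mechanism preserves straight edges but rounds corners, which is precisely the failure mode the rotated-shape-aware methods of the talk are designed to avoid.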

Dual-tree complex wavelet and its application to face recognition
Chaochun Liu, Sun Yat-sen University, China

Although the discrete wavelet transform (DWT) has been successfully used for pattern recognition, it suffers from two crucial shortcomings: shift variance and lack of directionality. Fortunately, the dual-tree complex wavelet transform appears to be a good solution to overcome these shortcomings. In this lecture, we first give an introduction to the dual-tree complex wavelet transform, then focus on its application to face recognition: the discriminant complex-WT-face system.

Remove camera shake from image gradients
Hui Ji, National University of Singapore, Singapore

Camera shake during exposure often leads to objectionably blurred images. Conventional blind deconvolution methods typically assume camera motion with uniform velocity, which is often an over-simplified model of real camera shake. We introduce a direct method to remove the blurring effects of camera shake from degraded images. The method assumes a uniform camera blur with varying velocity, and we show how to infer the blur kernel from the image gradient field.

New frontiers for scientific computing: Visual effects and real time
Joseph Teran, University of California, Los Angeles, USA

As computers get faster and architectures evolve, simulation of the dynamics of natural phenomena is becoming an increasingly indispensable tool for creating virtual worlds in movie special effects, video games and even in medicine. For example, nearly all companies involved in effects for movies and video games have a team dedicated to simulation-based dynamics of water, fire, smoke, explosions, rigid body dynamics and deformable body dynamics for cloth etc. Although classically considered too involved and prohibitively computationally burdensome for applications like movie special effects, simulation of such phenomena is now much more practical on a moderately powerful PC. Also, the bar has been raised so high for realism in these industries that simulating the physics of such phenomena is necessary to produce effects at the state of the art.

The governing equations for such phenomena come by and large from classical physics and are most often in the form of a system of partial differential equations. The development of algorithms for solving such equations with the computer is one of the cornerstones of applied mathematics and in this talk I will give an overview of how applied mathematics and scientific computing can be used for these exciting new frontiers of application.
