Posters:
Fixed-point algorithm based on proximity operator for image denoising
Yizun LIN
Abstract:
The ROF total-variation model is one of the most popular models in image
processing. In this model, the denoised image is the proximity operator of the
total variation evaluated at a given noisy image. The total variation can be viewed
as the composition of a convex function (the 1-norm for the anisotropic total
variation or the 2-norm for the isotropic total variation) with the first-order
difference operator. In order to solve the minimization problem of the ROF
total-variation model, we introduce definitions such as the subdifferential and the
proximity operator, and prove a number of theorems and lemmas about them. In
addition, we provide a characterization relating the subdifferential and the
proximity operator of convex functions. This characterization plays a crucial role
in the development of our algorithm for solving the ROF model.
We then investigate the proximity operator of the composition of a convex
function with a linear transformation. Since the proximity operators of the 1-norm
and the 2-norm are easy to compute, we can use the characterization of the
subdifferential and the proximity operator, together with the chain rule for
subdifferentials, to transform the problem of computing the proximity operator of
the composition of a convex function with a linear transformation into a
fixed-point problem. Since the operator whose fixed point we want to compute is
nonexpansive, we obtain the convergence of the Picard iterations. Specializing this
fixed-point methodology to the total-variation model, we finally develop our
fixed-point algorithm based on the proximity operator.
Keywords: total-variation, subdifferential, proximity operator, fixed-point
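To make the fixed-point idea concrete, the sketch below evaluates the proximity operator of lam*||Dx||_1 for a 1D signal by Picard iteration on a dual variable. The dual formulation, the step size tau = 0.25, and the projection onto the infinity-ball of radius lam are standard choices for this problem, not taken from the abstract itself:

```python
import numpy as np

def DT(p):
    """Adjoint of the forward-difference operator D (Dx = x[1:] - x[:-1])."""
    return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

def tv_denoise_1d(f, lam, tau=0.25, iters=500):
    """Evaluate the proximity operator of lam*||Dx||_1 at f (1D anisotropic
    ROF) by Picard iteration on a dual variable p; the update map is
    nonexpansive for tau <= 1/4 since ||D||^2 <= 4. A sketch of the
    fixed-point idea, not the author's exact scheme."""
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        x = f - DT(p)                                  # primal from dual
        p = np.clip(p + tau * np.diff(x), -lam, lam)   # step + projection
    return f - DT(p)
```

The returned signal is the (approximate) minimizer of 0.5*||x - f||^2 + lam*||Dx||_1, recovered from the fixed point of the dual update.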
An idea to improve the result of the ROF model by sharpening kernel
convolution
Hao LIU
Abstract:
When an image is corrupted by Gaussian noise with a large standard deviation, it
is hard to obtain a good result by adopting the ROF model alone to recover it. We
introduce an idea for improving the result by using the ROF model together with a
sharpening kernel convolution. We obtain some improvement in the result and a
higher PSNR value.
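As an illustration of the sharpening step, a common 3x3 sharpening kernel can be applied by convolution. The abstract does not specify which kernel is used, so the kernel below is only one standard choice:

```python
import numpy as np

# One common 3x3 sharpening kernel (an illustrative choice; it sums to 1,
# so constant regions are preserved while edges are amplified).
SHARPEN = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])

def sharpen(img):
    """3x3 sharpening convolution with edge replication at the border."""
    pad = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += SHARPEN[di, dj] * pad[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out
```

In the spirit of the abstract, this convolution would be combined with an ROF denoising step; the order and weighting of the two operations are the author's contribution and are not reproduced here.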
Linearized alternating direction method for a convex model of
material identification in hyperspectral image
Yoko OKUDA
Abstract:
Hyperspectral imaging sensors record up to several hundred different
frequencies, and hyperspectral images are used for material identification.
Material identification in a hyperspectral image is mathematically formulated as a
non-negative matrix factorization. In 2011, a new model solving this problem as a
convex optimization was proposed by E. Esser et al. This model is solved by the
alternating direction method of multipliers (ADMM). Also in 2011, a linearized
alternating direction method (ADM) was proposed by R. Chan et al., which reduces
the computational cost. We apply the linearized ADM to the model of this problem
to reduce its computational cost. In this talk, the reformulation of the problem
from non-negative matrix factorization to convex optimization and the difference
in experimental results between solving by ADMM and by linearized ADM are
discussed.
Determination of regularization parameter in estimation of the Robin
coefficient in the Laplace equation
Chao WANG
Abstract:
We consider the inverse problem of determining the Robin coefficient from
measured data on the accessible part of the boundary, which is nonlinear and
ill-posed. Two regularization methods, namely the Tikhonov regularization and the
H1 regularization, are considered. We propose a Gauss-Newton method to solve the
regularized nonlinear least-squares problem and a way to choose a suitable
regularization parameter based on the normalized cumulative periodogram.
Numerical results show that these methods are efficient and competitive.
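The normalized-cumulative-periodogram (NCP) criterion can be sketched in a few lines: the NCP of a white-noise residual hugs a straight line, so one picks the regularization parameter whose residual deviates least from that line. The 1D sketch below illustrates the general idea and is not the authors' implementation:

```python
import numpy as np

def ncp_distance(residual):
    """Sup-norm distance of the residual's normalized cumulative
    periodogram from the straight line expected for white noise.
    A smaller value means the residual looks more like pure noise."""
    # periodogram: squared magnitudes of the positive frequencies (DC dropped)
    p = np.abs(np.fft.rfft(residual))[1:] ** 2
    ncp = np.cumsum(p) / np.sum(p)
    line = np.arange(1, len(p) + 1) / len(p)
    return np.max(np.abs(ncp - line))
```

A parameter sweep would evaluate `ncp_distance` on the residual for each candidate regularization parameter and keep the minimizer.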
Lax-Friedrichs fast sweeping method for solving eikonal equations on
surfaces
Ka-Wah WONG
Abstract:
The eikonal equation has many applications in computer vision, optimal control
and other areas. There are plenty of nice methods for solving eikonal equations on
rectangular grids, but relatively few for solving them on surfaces. Here, we
propose a simple surface eikonal solver which is found to be both moderately
accurate and computationally efficient.
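For reference, the standard fast sweeping method on a rectangular grid, the baseline the surface solver builds on, can be sketched as follows. The Gauss-Seidel update and the four sweep orderings follow the classical scheme; the surface version itself is not reproduced here:

```python
import numpy as np

def fast_sweep_eikonal(f, src, h=1.0, sweeps=4):
    """Fast sweeping solver for |grad u| = f on an m x n grid with a point
    source at index `src`. Each sweep applies the upwind Godunov update in
    all four orderings (a sketch of the classical rectangular-grid scheme)."""
    m, n = f.shape
    u = np.full((m, n), 1e10)
    u[src] = 0.0
    for _ in range(sweeps):
        for si, sj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            for i in range(m)[::si]:
                for j in range(n)[::sj]:
                    if (i, j) == src:
                        continue
                    a = min(u[i - 1, j] if i > 0 else 1e10,
                            u[i + 1, j] if i < m - 1 else 1e10)
                    b = min(u[i, j - 1] if j > 0 else 1e10,
                            u[i, j + 1] if j < n - 1 else 1e10)
                    fh = f[i, j] * h
                    if abs(a - b) >= fh:          # one-sided update
                        cand = min(a, b) + fh
                    else:                          # two-sided update
                        cand = 0.5 * (a + b + np.sqrt(2 * fh * fh - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```

With f = 1 this computes an approximate distance map from the source; the update is exact along grid axes and overestimates slightly along diagonals, as expected of the first-order upwind scheme.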
A Recursive Algorithm for Multi-frequency Acoustic Inverse Source
Problems.
Boxi XU
Abstract:
We investigate an iterative/recursive algorithm for recovering unknown sources
of an acoustic field from multi-frequency measurement data. Under some additional
regularity assumptions on the source functions, a rigorous convergence analysis is
presented, assuming the background medium is homogeneous and the measurement
data are noise-free. An error bound estimate is also provided when the observation
data are contaminated by noise. Numerical illustrations verify the reliability and
efficiency of our proposed algorithm.
Patch-based Inpainting Using Adaptive Dictionary by ADMM
Yu YANG
Abstract:
Image inpainting aims to fill in the data in a missing area using information
from the observed region of an image. In this work, we introduce a novel
patch-based inpainting model and algorithm using an adaptive dictionary under the
alternating direction method of multipliers (ADMM) optimization framework. The
optimization model for the linear combination coefficients of similar patches is
regularized by the ℓ2-norm and is efficiently solved by the proposed ADMM-based
algorithm. A metric measuring the similarity between two patches is derived from
the Laplace probability distribution of the coefficients of a DCT tight frame
system, and is more robust for distinguishing differences between patches than the
sum of squared differences under the ℓ2-norm. The patch-based dictionary is
adaptively built, using the proposed metric, from nonlocal data similar to the
target patch. The inpainting order of the patches is determined by an improved
patch sparsity. The results show that the proposed patch-based image inpainting
algorithm is efficient in interpolating large missing regions and provides more
visually plausible results.
The Regularization Parameter
Meipeng ZHI
Abstract:
There are three parameters in the proximity algorithm. We discuss how to
choose these parameters in order to make the algorithm more efficient. The most
important of them is the regularization parameter: its choice affects the balance
between removing the noise and preserving the signal content. We introduce the
relationship between the regularization parameter and the ROF model. Finally, we
try to propose a new method to solve the ROF model.
Oral Presentation
A Convex Variational Model for Restoring Blurred Images with Rician
Noise.
Liyuan CHEN
Abstract:
In this presentation, a new convex variational model for restoring images
degraded by blur and Rician noise is proposed. The new method is inspired by
previous works in which the non-convex variational model obtained by maximum a
posteriori (MAP) estimation has been presented. Based on the statistical property
of Rician noise, we propose adding an additional data-fidelity term to the
non-convex model, which leads to a new strictly convex model under a mild
condition. Due to the convexity, the solution of the new model is unique and
independent of the initialization of the algorithm. We utilize a primal-dual
algorithm to solve the model.
Numerical results are presented in the end to demonstrate that with respect to
image restoration capability and CPU-time consumption, our model outperforms
some of the state-of-the-art models in both medical and natural images.
A novel cell nuclei segmentation method for 3D C. elegans embryonic
time-lapse images
Long CHEN
Abstract:
Recently a series of algorithms has been developed, providing automatic tools
for tracing the C. elegans embryonic cell lineage. In these algorithms, 3D images
collected from a confocal laser scanning microscope are processed, and the output
is a cell lineage with the cell division history and cell positions over time.
However, current image segmentation algorithms suffer from a high error rate,
especially after the 350-cell stage, because of the low signal-to-noise ratio as
well as the low resolution along the Z axis (0.5-1 microns). As a result,
correction of the errors becomes a huge burden. We therefore propose a new type of
nuclei segmentation method embracing a bi-directional prediction procedure, which
can greatly reduce the number of false negative errors, the most common errors in
previous segmentations. The results of this research demonstrate the advantages of
the bi-directional prediction method in nuclei segmentation over the existing
method (StarryNite/MatLab StarryNite). Our method could be an efficient tool for
the analysis of high-throughput, large C. elegans microscopy image data sets.
Optimized Conformal Parameterization with Controllable Area
Distortions
Ka Chun LAM
Abstract:
Parameterization, a process of mapping a complicated domain onto a simple
parameter domain, is important in various fields such as computer graphics, medical
imaging and numerical computation. Conformal parameterization has been widely
used since it preserves the local geometry well. However, a major drawback is that
conformal parameterization often introduces area distortions, which leads to
problems in some applications such as texture mapping. It is therefore desirable to
obtain a parameterization that balances between conformality and area distortions.
In this work, we propose a variational algorithm to compute the optimized conformal
parameterization with controllable area distortions. The distribution of area
distortions can be prescribed by users according to the applications. The main idea is
to minimize a combined energy functional involving Beltrami coefficient and
Jacobian terms, which control the conformality and the area distortions
respectively. Soft or hard landmark constraints can also be incorporated into the
model. Experiments have been carried out on real surface data. Results demonstrate
the efficacy of the proposed model in obtaining an optimized parameterization that
preserves both the local geometry and the prescribed area distortions as well as
possible.
Comparison of 3 algorithms for computation of Teichmuller extremal
maps
Tsz Chun YAM
Abstract:
Conformal maps, which are angle preserving diffeomorphisms, have been
widely applied in geometry processing, such as surface registration, texture mapping
and remeshing. However, when landmark constraints between the surfaces are
imposed, obtaining a conformal map may not be feasible. In this case,
quasi-conformal maps, which allow controlled conformality distortion, are
considered instead. Several schemes have been proposed for computing
quasi-conformal maps by minimizing certain types of energy. Lipman et al. proposed
the concept of a bounded distortion space and minimize the energy functional on
its convex partition by quadratic programming or conic programming. Weber et al.
and Lui et al. proposed computation schemes for a special kind of quasi-conformal
map, called the Teichmuller extremal map, in a completely different manner. In
this presentation,
we will briefly introduce their mathematical models and compare their performance
in different experiments.
A new fruit recognition method based on multiple features fusion
Hulin KUANG
Abstract:
Fruit recognition, a technique which automatically recognizes the classes of
fruits in an image, has wide applications in automated fruit harvesting machines,
supermarkets and grocery stores, children's education, and health monitoring
systems on mobile phones. In this presentation, a new fruit recognition method
based on multiple-feature fusion is presented. A large and complex fruit dataset
containing 20 classes of fruit is built. Five features, including simple shape,
color, Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG) and an
LBP feature based on the magnitude of the Gabor feature (named GaborLBP), are
combined. The five features complement each other. A framework for selecting the
optimal feature parameters and the optimal multiple-feature fusion is utilized.
The optimal feature parameters are selected by learning and cross validation on
the training samples. The optimal multiple-feature fusion is selected by fruit
recognition accuracy. In addition, the machine learning algorithm also influences
the recognition accuracy. Therefore, four machine learning algorithms, including
Support Vector Machine (SVM), Multi-class AdaBoost, Artificial Neural Network
(ANN) and K nearest neighbors (KNN), are tested to select the optimal combination
of feature fusion and machine learning algorithm. When our method is compared with
other fruit recognition algorithms on the same dataset, it is demonstrated that
our proposed method achieves the highest classification accuracy, at 89.4% for 20
classes of fruits.
Detecting Alzheimer Disease Patients' Brain Structures with Quasi-
conformal Method.
Hanfang LI
Abstract:
In my presentation, I mainly focus on using the quasi-conformal method to
compare brain structures between normal people and Alzheimer's disease patients.
This method can be highly accurate and time-saving for such analysis. We use the
MRI scans of many patients to perform surface reconstructions. By finding the
structural differences among patients, we can assist neuroscientists in observing
the main atrophic structures and thus in further developing treatments for this
disease.
Effective noise–suppressed and artifact-reduced reconstruction of
SPECT data using a preconditioned alternating projection algorithm
Si LI
Abstract:
We have recently developed a Preconditioned Alternating Projection Algorithm
(PAPA) with total variation (TV) regularizer for solving the penalized maximum
likelihood optimization model for SPECT reconstruction. This algorithm belongs to a
novel class of fixed-point proximity methods. The goal of this work is to investigate
how PAPA performs while dealing with realistic noisy SPECT data, to compare its
performance with more conventional algorithms, and to address issues with TV
artifacts by proposing a novel form of the algorithm invoking high-order (HO) TV
regularization, denoted as PAPA-HOTV. For high-noise simulated SPECT data, PAPA-
HOTV significantly outperforms several conventional methods in terms of “hot”
lesion detectability, noise suppression, and computational efficiency, with only
limited loss of local spatial resolution. Unlike TV-type methods, PAPA-HOTV does not
create sizable staircase artifacts. PAPA-HOTV shows significant promise for clinically
useful reconstructions of low-dose SPECT data. Therefore, it offers an approach to
the important need of reducing radiation dose to patients in selected nuclear
medicine studies.
Color Image Segmentation by Minimal Surface Smoothing
Zhi LI
Abstract:
In this paper, we propose a two-stage approach for color image segmentation,
which is inspired by minimal surface smoothing. Indeed, the first stage is to find a
smooth solution to a convex variational model related to minimal surface smoothing.
The classical primal-dual algorithm can be applied to efficiently solve the
minimization problem. Once the smoothed image $u$ is obtained, in the second
stage, the segmentation is done by thresholding. Here, instead of using the classical
K-means to find the thresholds, we propose a hill-climbing procedure to find the
peaks on the histogram of $u$, which can be used to determine the required
thresholds. The benefit of such an approach is that it is more stable and can find
the number of segments automatically. Finally, experimental results illustrate
that the proposed algorithm is very robust to noise and exhibits superior
performance for color image segmentation.
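The hill-climbing step of the second stage can be illustrated on a 1D histogram: every bin climbs uphill to its nearest local maximum, and the distinct maxima become the peaks whose midpoints then serve as thresholds. The smoothing window and the 1% mass cutoff below are my own choices, not necessarily the paper's:

```python
import numpy as np

def histogram_peaks(u, bins=256, smooth=15):
    """Find the peaks of the histogram of u by hill climbing: every bin
    walks strictly uphill to a local maximum; distinct maxima with
    non-negligible mass are returned as peak locations. A sketch of the
    second-stage idea only."""
    hist, edges = np.histogram(u.ravel(), bins=bins)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode='same')  # denoise
    tops = set()
    for start in range(bins):
        i = start
        while True:                        # strictly uphill, so it terminates
            left = h[i - 1] if i > 0 else -1.0
            right = h[i + 1] if i < bins - 1 else -1.0
            if left > h[i] and left >= right:
                i -= 1
            elif right > h[i]:
                i += 1
            else:
                break
        tops.add(i)
    tops = {i for i in tops if h[i] >= 0.01 * h.max()}   # drop stray bins
    centers = 0.5 * (edges[:-1] + edges[1:])
    return sorted(centers[i] for i in tops)
```

Given the peak locations, the thresholds are simply the midpoints between consecutive peaks, so the number of segments is discovered rather than prescribed.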
Brief introduction to fMRI
Hongwu LIN
Abstract:
We use our brains every day but know only a little about them. fMRI provides us
a powerful tool for investigating our brains. A short introduction to the brain
and fMRI is given first in section 1, followed by some mature mathematical methods
and a mixed model in section 2.
High Resolution Image Deblurring with Displacement Errors by using
envl1/TV Model
Wenting LONG
Abstract:
High resolution image reconstruction arises in many applications, such as
remote sensing, surveillance, and medical imaging. It refers to the reconstruction
of a high-resolution image from multiple low-resolution, shifted, degraded samples
of a true image. Displacement error is inevitable during the capture of the low
resolution images. The goal of this work is to address the specific issue of high
resolution image reconstruction with displacement errors by proposing a novel
model invoking the Moreau envelope, denoted by envl1/TV, which has been studied
and explored extensively in the present work. We compare its performance with the
classic L2-TV deblurring model on realistic noisy data by proposing a fixed point
algorithm, whose convergence condition is given; an adaptive strategy is also
given for a better PSNR and a faster convergence rate. Two metrics often used in
medical image processing have been carefully selected to quantify the simulation
results, and they show that the envl1/TV model is more effective in noise
suppression and artifact reduction.
Fast Video Compression Algorithms and Real-Time encoder
implementations
Biao MIN
Abstract:
To satisfy the demands of transmission and storage of HD (High Definition) and
UHD (Ultra High Definition) videos, many video compression techniques have been
proposed in recent decades, and the standardized video compression frameworks are
widely applied in various fields. The HEVC (High Efficiency Video Coding)
standard, released in 2013, is considered highly progressive in coding efficiency.
Its novelties include quad-tree based block partitioning, refined intra prediction
angles, advanced motion vector prediction, fractional-sample interpolation, block
merging, in-loop deblocking filtering and sample adaptive offset. At equal
perceptual quality, these new techniques can reduce the bitrate by 40%~50% over
previous standards and other proposals, while the encoder is also expected to be
several times more complex.
This talk presents some fast algorithms that can be embedded into the HEVC
framework to reduce the coding complexity while the coding efficiency is retained.
The proposed methodologies include fast algorithms for intra prediction and inter
prediction in HEVC. The fast intra prediction algorithm is based on the proposed edge
complexity, and the fast inter algorithm uses the motion information of spatial and
temporal neighbors as reference. In this talk, the hardware architecture is also
presented to discuss how the encoder can be implemented efficiently on hardware
platforms.
Image reconstruction under non-Gaussian noise
Federica SCIACCHITANO
Abstract:
Digital images are often subject to a variety of distortions during
acquisition, processing, compression, storage, transmission and reproduction. One
of the most important tasks of mathematical image processing is to reconstruct the
original image from the blurred and degraded image. Over the years, additive white
Gaussian noise has been widely studied, since it is the simplest and most
tractable noise. However, since pure Gaussian noise rarely occurs in real
applications, other kinds of noise have been introduced. In this talk, we focus on
impulse and Cauchy noise. Based on total variation, we propose two variational
methods for recovering blurred images corrupted by impulse noise and Cauchy noise.
Numerical results show the potential of the proposed methods compared with
existing methods.
Quasi-conformal parameterizations for multiply-connected domains
Kin Tat HO
Abstract:
In this talk, I am going to present a method to compute the Quasi-conformal
parameterization (QCMC) for a multiply-connected 2D domain or surface. QCMC
computes a Quasi-conformal map from a multiply-connected domain S onto a
punctured disk Ds associated with a given Beltrami differential. The Beltrami
differential, which measures the conformality distortion, is a complex-valued
function with supremum norm strictly less than 1. Every Beltrami differential
gives a conformal structure of S. Hence, the conformal module of Ds, which
consists of the radii and centers of the inner circles, can be fully determined,
up to a Möbius transformation. In our project, we propose an iterative algorithm
to simultaneously search for the conformal module and the optimal Quasi-conformal
parameterization. The key idea is to minimize the Beltrami energy subject to
suitable boundary constraints. The optimal solution is our desired Quasi-conformal
parameterization
onto a punctured disk. The parameterization of the multiply-connected domain
simplifies numerical computations and has important applications in various fields,
such as in computer graphics and vision. Experiments have been carried out on
synthetic data together with real multiply-connected Riemann surfaces. Results show
that our proposed method can efficiently compute Quasi-conformal
parameterizations of multiply-connected domains and outperforms other state-of-
the-art algorithms. Applications of the proposed parameterization technique have
also been explored.
FLASH: Fast Landmark Aligned Spherical Harmonic Parameterization for
Genus-0 Closed Brain Surfaces
Pui Tung CHOI
Abstract:
Surface registration between cortical surfaces is crucial in medical imaging for
performing systematic comparisons between brains. Landmark-matching registration
that matches anatomical features, called the sulcal landmarks, is often required
to obtain a meaningful 1-1 correspondence between brain surfaces. This is commonly
done by parameterizing the surface onto a simple parameter domain, such as the
unit sphere, in which the sulcal landmarks are consistently aligned. Landmark-
matching surface registration can then be obtained from the landmark aligned
parameterizations. For genus-0 closed brain surfaces, the optimized spherical
harmonic parameterization, which aligns landmarks to consistent locations on the
sphere, has been widely used. This approach is limited by the loss of bijectivity under
large deformations and the slow computation. In this work, a fast algorithm (called
FLASH) to compute the optimized spherical harmonic parameterization with
consistent landmark alignment is proposed. This is achieved by formulating the
optimization problem on $\overline{\mathbb{C}}$ and thereby linearizing the
problem. Errors introduced near the pole are corrected using quasi-conformal
theories. Also, by adjusting the Beltrami differential of the mapping, a diffeomorphic
(1-1, onto) spherical parameterization can be effectively obtained. The proposed
algorithm has been tested on 38 human brain surfaces. Experimental results
demonstrate that the computation of the landmark aligned spherical harmonic
parameterization is significantly speeded up using the proposed algorithm.
A Fast Sweeping Method for Computing the Geodesic Distance Map on
Manifolds Represented by the Grid Based Particle Method
Meng WANG
Abstract:
We improve the discretization of the Laplace-Beltrami operator defined on an
interface represented by the Grid Based Particle Method (GBPM). Based on this
discretization, we develop a simple iterative scheme to invert the operator on
closed manifolds. As an interesting application, we propose a fast sweeping method
for solving eikonal equations on surfaces. Our analysis and experiments show that
the new discretized Laplace-Beltrami operator is nearly diagonally dominant. When
incorporated into a fast sweeping method to obtain the viscosity solution of the
eikonal equation on surfaces, the corresponding iteration matrix has spectral
radius less than one.
TEMPO: Teichmuller Extremal Mapping via Point-cloud Optimization
Ting Wei MENG
Abstract:
When solving registration problems, it is important to find conformal mappings
between surfaces. However, with extra constraints (such as landmark constraints)
enforced, conformal mappings generally do not exist. This motivates us to look for
the Teichmuller extremal mapping, which satisfies the required constraints while
minimizing the maximal conformality distortion. In this presentation, I will show
a method to compute the Teichmuller extremal mapping from a surface represented by
a point cloud to the unit disk with the boundary fixed, which can be used for
point cloud registration. The basic idea is to represent the set of
diffeomorphisms using Beltrami coefficients (BCs), and to look for an optimal BC
associated with the desired Teichmuller mapping. In the point cloud case, the BCs
are calculated using a moving least-squares method, so that we can use the Linear
Beltrami Solver (LBS) on the point cloud to reconstruct the associated
diffeomorphism from the optimal BC. The Teichmuller extremal mapping on the point
cloud can therefore be calculated by a Quasi-conformal (QC) iterative algorithm.
Staircasing effect: an experimental view
Zhifeng WU
Abstract:
The total variation regularization method has greatly influenced imaging
science since L. Rudin and S. Osher introduced the ROF model in the 1990s. The
advantage of total variation consists in preserving edges. However, total
variation regularization tends to make the obtained image cartoon-like; this is
called the staircasing effect and is a major drawback of the regularization
method. A lot of effort has been put into modifying the total variation model in
order to reduce the staircasing effect. Two approaches, namely the infimal
convolution approach proposed by A. Chambolle and P.-L. Lions and the total
generalized variation approach proposed by K. Bredies and T. Pock, have drawn a
lot of attention. In this brief talk, I am glad to show some images reconstructed
by adopting the infimal convolution approach, and to analyze the change in the
staircasing effect when the total variation is replaced with the infimal
convolution. Besides, I will try to give an introductory analysis of the infimal
convolution model itself.
A numerical embedding method for solving PDEs on general
geometries
Ningchen YING
Abstract:
In this talk, we will give a method for solving PDEs on surfaces in
$\mathbb{R}^N$. Unlike general methods that solve PDEs on the whole domain, we add
an extra term to the equations, which makes the resulting method approximate the
solution of the PDE on the surface. We illustrate numerical convergence results
for some general model problems, and also discuss its application to some more
complex equations.
Multiple feature iterative hashing
Lifang ZHANG
Abstract:
With the increase in the amount of data and in data dimension, classification
and querying of data have become increasingly important. In order to better
retrieve video, images and text, hash methods have emerged in recent years. There
have been many good hash algorithms, widely applied in pattern recognition and
machine learning because of their high speed and their adaptability to
high-dimensional data. This paper proposes a new multiple-feature hash method,
called multiple feature iterative hashing (MFIH). The method considers the compact
hash code of the data on a single feature, and also considers the impact on the
hash codes of the relationships between features. Moreover, we obtain optimal hash
codes by iterative quantization. Experiments show that our method achieves better
efficiency than three other single-feature hash methods.
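The iterative-quantization step can be illustrated with the classical ITQ alternation: fix the rotation and binarize, then fix the codes and solve an orthogonal Procrustes problem. This sketches only the quantization component; the multiple-feature fusion of MFIH is not reproduced here:

```python
import numpy as np

def iterative_quantization(V, n_iter=50, seed=0):
    """Learn an orthogonal rotation R minimizing the quantization loss
    ||B - V R||_F with binary codes B = sign(V R), in the style of ITQ.
    V is an (n_samples, n_bits) matrix of zero-centered projected data."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))   # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R: best binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)  # fix B: orthogonal Procrustes
        R = U @ Wt
    return np.sign(V @ R), R
```

Each half-step can only decrease the quantization loss, so the alternation converges; the final `B` gives the hash codes and `R` is applied to new queries.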
Video Background and Foreground Modelling
Rui ZHAO
Abstract:
In video analysis, foreground detection and background modelling have many
applications in image processing and computer vision. The problem is challenging
if the background is moving or there are many high-resolution frames in the video.
Our aim is to find an efficient way to model the background and then extract the
foreground from the video. Our model can be divided into two stages: 1) find the
static background by maximizing the histogram; 2) model the background as the
solution of a trust-region problem with constraints related to the signal itself
and its variations. Practical examples show that our model is effective, reliable
and much quicker.
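Stage 1, choosing at each pixel the intensity that maximizes the temporal histogram, can be sketched as follows. The bin count and the [0, 255] value range are illustrative choices, and the trust-region stage is not reproduced:

```python
import numpy as np

def static_background(frames, bins=32):
    """Estimate the static background as the per-pixel mode of the intensity
    histogram over all frames. `frames` is a (T, H, W) array with values
    in [0, 255]; the returned background is (H, W)."""
    T, H, W = frames.shape
    edges = np.linspace(0, 255, bins + 1)
    # bin index of every sample; clip handles the right edge (value 255)
    idx = np.clip(np.digitize(frames, edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bg = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            counts = np.bincount(idx[:, i, j], minlength=bins)
            bg[i, j] = centers[np.argmax(counts)]   # most frequent bin
    return bg
```

As long as each pixel shows the background in a majority of frames, transient foreground objects do not shift the per-pixel mode.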
A Dictionary-Based Algorithm for Dimensionality Reduction and Data
Reconstruction
Zhong ZHAO
Abstract:
Nonlinear dimensionality reduction (DR) is a basic problem in manifold
learning. However, many DR algorithms cannot deal with the out-of-sample extension
problem and thus cannot be used in large-scale DR problems. Furthermore, many DR
algorithms only consider how to reduce the dimensionality but seldom address how
to reconstruct the original high dimensional data from the low dimensional
embeddings (i.e. the data reconstruction problem). In this paper, we propose a
dictionary-based algorithm to deal with the out-of-sample extension problem for
large-scale DR tasks. In this algorithm, we train a high dimensional dictionary
and a low dimensional dictionary corresponding to the high dimensional data and
their low dimensional embeddings, respectively. With these two dictionaries,
dimensionality reduction and data reconstruction can be easily conducted by coding
the input data point over one dictionary, and then using the code to predict the
output data point over the other dictionary. Compared to existing DR algorithms,
our algorithm is highly efficient, since an analytic solution is derived. Besides,
our reconstruction algorithm can be applied to many DR algorithms to give them the
ability to perform data reconstruction. Experiments on synthetic and real-world
datasets show that, for both dimensionality reduction and data reconstruction, our
algorithm is accurate and fast.
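The coding idea can be sketched with a pair of toy dictionaries. A ridge-regularized least-squares coder stands in here for the paper's coding scheme, which is an assumption on my part; the point is only the coupled use of the two dictionaries:

```python
import numpy as np

def ridge_code(D, x, eps=1e-6):
    """Code x over dictionary D (columns = atoms) by regularized least
    squares; a simple stand-in for the paper's coding scheme."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + eps * np.eye(k), D.T @ x)

def dr_and_reconstruct(D_high, D_low, x):
    """Dimensionality reduction: code x over D_high, multiply the code by
    D_low to get the embedding y. Reconstruction runs the same recipe in
    reverse: code y over D_low, multiply by D_high."""
    a = ridge_code(D_high, x)
    y = D_low @ a                          # low-dimensional embedding
    x_rec = D_high @ ridge_code(D_low, y)  # reconstruction from y
    return y, x_rec
```

When the input lies in the span of the high-dimensional dictionary and the low-dimensional dictionary is well conditioned, the round trip recovers the input almost exactly, which is the property the coupled dictionaries are trained to achieve.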
Schedule
October 31 Poster Presentation
8:55 p.m. – 9:40 p.m.
Yizun LIN
Fixed-point algorithm based on proximity operator for image denoising
Hao LIU
An idea to improve the result of the ROF model by sharpening kernel
convolution
Yoko OKUDA
Linearized alternating direction method for a convex model of material
identification in hyperspectral image
Chao WANG
Determination of regularization parameter in estimation of the Robin
coefficient in the Laplace equation
Ka-Wah WONG
Lax-Friedrichs fast sweeping method for solving eikonal equations on
surfaces
Boxi XU
A Recursive Algorithm for Multi-frequency Acoustic Inverse Source
Problems
Yu YANG
Patch-based Inpainting Using Adaptive Dictionary by ADMM
Meipeng ZHI
The Regularization Parameter
November 1 Oral Presentation
9:00 a.m.
Liyuan CHEN
A Convex Variational Model for Restoring Blurred Images with Rician
Noise.
9:12 a.m.
Long CHEN
A novel cell nuclei segmentation method for 3D C. elegans embryonic
time-lapse images
9:24 a.m.
Ka Chun LAM
Optimized Conformal Parameterization with Controllable Area Distortions
9:36 a.m.
Tsz Chun YAM
Comparison of 3 algorithms for computation of Teichmuller extremal
maps
9:48 a.m.
Hulin KUANG
A new fruit recognition method based on multiple features fusion
10:00 a.m.
Hanfang LI
Detecting Alzheimer Disease Patients' Brain Structures with Quasi-
conformal Method
10:12 a.m.
Si LI
Effective noise–suppressed and artifact-reduced reconstruction of
SPECT data using a preconditioned alternating projection algorithm
10:24 a.m. Tea Break
11:00 a.m.
Zhi LI
Color Image Segmentation by Minimal Surface Smoothing
11:12 a.m.
Hongwu LIN
Brief introduction to fMRI
11:24 a.m.
Wenting LONG
High Resolution Image Deblurring with Displacement Errors by using
envl1/TV Model
11:36 a.m. Biao MIN
Fast Video Compression Algorithms and Real-Time encoder
implementations
11:48 a.m. End
November 2 Oral Presentation
9:00 a.m.
Federica SCIACCHITANO
Image reconstruction under non-Gaussian noise
9:12 a.m.
Kin Tat HO
Quasi-conformal parameterizations for multiply-connected domains
9:24 a.m.
Pui Tung CHOI
FLASH: Fast Landmark Aligned Spherical Harmonic Parameterization for
Genus-0 Closed Brain Surfaces
9:48 a.m.
Meng WANG
A Fast Sweeping Method for Computing the Geodesic Distance Map on
Manifolds Represented by the Grid Based Particle Method
10:00 a.m.
Ting Wei MENG
TEMPO: Teichmuller Extremal Mapping via Point-cloud Optimization
10:12 a.m. Tea Break
10:45 a.m.
Zhifeng WU
Staircasing effect: an experimental view
10:57 a.m.
Ningchen YING
A numerical embedding method for solving PDEs on general geometries
11:09 a.m.
Lifang ZHANG
Multiple feature iterative hashing
11:21 a.m.
Rui ZHAO
Video Background and Foreground Modelling
11:33 a.m.
Zhong ZHAO
A Dictionary-Based Algorithm for Dimensionality Reduction and Data
Reconstruction
11:57 a.m. End