Research Article
Neural Network for Sparse Reconstruction

Qingfa Li,1 Yaqiu Liu,1 and Liangkuan Zhu2

1 School of Information and Computer Engineering, Northeast Forestry University, No. 26, Hexing Street, Harbin 150040, China
2 College of Electromechanical Engineering, Northeast Forestry University, No. 26, Hexing Street, Harbin 150040, China

Correspondence should be addressed to Yaqiu Liu; [email protected]

Received 24 December 2013; Accepted 4 March 2014; Published 31 March 2014

Academic Editor: Huaiqin Wu

Copyright © 2014 Qingfa Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 107620, 8 pages, http://dx.doi.org/10.1155/2014/107620

We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a kind of sparse reconstruction problem. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and choosing the item in the set-valued map of a differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.

1. Introduction

Sparse reconstruction is the term used to describe the process of extracting some underlying original source signals from a number of observed mixture signals, where the mixing model is either unknown or the knowledge about the mixing process is limited. The problem of recovering a sparse signal from noisy linear observations arises in many real world sensing applications [1-5]. Mathematically, a signal recovery problem can be formulated as estimating the original signal based on noisy linear observations, which can be expressed as

b = Ax + η,  (1)

where A ∈ R^{m×n} is the mixing matrix, x ∈ R^n is the original signal, b ∈ R^m is the observed signal, and η ∈ R^m is the noise. In many cases, A is a matrix of block Toeplitz with Toeplitz blocks (BTTB) when zero boundary conditions are applied and block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel blocks (BTHTHB) when Neumann boundary conditions are used [6]. Then, this problem can be viewed as a linear inverse problem. A standard approach to solving linear inverse problems is to define a suitable objective function and to minimize it. Solving this problem is often divided into two steps: estimation of the mixing matrix A and recovery of the original signal x. In this paper, we focus on the study of the second step, where we assume that the mixing matrix A is known.

Generally, finding a solution with few nonzero entries for an underdetermined linear system with noise is often modeled as the regularization problem

min ‖Ax − b‖² + λ‖x‖₀,  (2)

where λ > 0 and ‖x‖₀ is defined by the number of nonzero entries in x. However, the l₂-l₀ regularized problem (2) is difficult to deal with because of the discrete structure of the l₀ norm, which drives researchers to pay attention to the continuous l₂-l₁ minimization problem

min ‖Ax − b‖² + λ‖x‖₁.  (3)

The first term in (3) is often called the data fitting term, which forces the solutions of (3) to be close to the data, and the second term is often called the regularization or potential term, which pushes the solutions to exhibit some prior expected features. Under certain conditions, the l₂-l₁ problem and the l₂-l₀ problem have the same solution sets [7]. The l₂-l₁ problem is a continuous convex optimization problem and can be efficiently solved; it is known as the Lasso [8].




A class of signal recovery problems can be formulated as

min ‖Ax − b‖² + λ‖Dx‖₁
s.t. x ∈ Ω,  (4)

where D is a linear operator, λ is the regularization parameter that controls the trade-off between the regularization term and the data-fitting term, and the constraint set Ω is a closed convex subset of R^n.

Optimization problems arise in a variety of scientific and engineering applications, and they often need real time solutions. Since the computing time greatly depends on the dimension and the structure of the optimization problem, numerical algorithms are usually less effective for large scale or real time optimization problems. In many applications, real time optimal solutions are usually imperative, such as on-board signal processing and robot motion planning and control. One promising approach to handle these problems is to employ artificial neural networks. During recent decades, the neural dynamical method for solving optimization problems has been a major area in neural network research based on circuit implementation [9-12]. First, the structure of a neural network can be implemented physically by designated hardware such as application-specific integrated circuits, where the computational procedure is distributed and parallel. This lets the neural network approach solve optimization problems in running time orders of magnitude faster than conventional optimization algorithms executed on general-purpose digital computers. Second, neural networks can solve many optimization problems with time-varying parameters. Third, the dynamical and ODE techniques can be applied to continuous-time neural networks. Recent reports have shown that global convergence can be obtained by the neural network approach under some weaker conditions.

Since the neural network was first proposed for solving linear [13, 14] and nonlinear [15] programming problems, many researchers have been inspired to develop neural networks for optimization. Many types of neural networks have been proposed to solve various optimization problems, for example, the recurrent neural network, the Lagrangian network, the deterministic annealing network, the projection-type neural network, a generalized neural network, and so forth. In [16], Chong et al. proposed a neural network for the linear programming problem with finite time convergence. In [17], a generalized neural network was presented for solving a class of nonsmooth convex optimization problems. In [18], a neural network was defined by using the penalty function method and differential inclusion for solving a class of nonsmooth convex optimization problems. In fact, in many important applications, a neural network built by a differential inclusion is an important method to solve a class of nonsmooth optimization problems. One has to mention that the optimization problems are not differentiable in many important applications. Moreover, the neural networks for smooth optimization problems require the gradients of the objective and constraint functions, so these networks cannot solve nonsmooth optimization problems. Using smoothing techniques in neural networks is an effective method for solving nonsmooth optimization

problems [19, 20]. The main feature of the smoothing method is to approximate the nonsmooth functions by parameterized smooth functions [21, 22]. By smoothing approximations, we can give a class of smooth functions which converge to the original nonsmooth function and whose gradients converge to the subgradient of the nonsmooth function. For many constrained optimization problems, projection is a simple and effective method for handling the constraints. In [23, 24], projection has been used in neural networks for solving some kinds of constrained optimization problems.

Based on the advantages of neural networks, in this paper we will propose a neural network and use some mathematical techniques to solve optimization problem (4). Problem (4) is nonsmooth. Many neural networks are modeled by differential inclusions, which have the difficulty of choosing the right set-valued map. In this paper, we will introduce a smoothing function to overcome this problem. Using smoothing techniques in a neural network is an interesting and promising method for solving (4).

Notation. Throughout this paper, ‖·‖ denotes the l₂ norm and ‖·‖₁ denotes the l₁ norm.

2. Preliminary Results

In this section, we will introduce several basic definitions and lemmas which are used in the development.

Definition 1. Suppose that f is Lipschitz near x; the generalized directional derivative of f at x in the direction v ∈ R^n is given by

f⁰(x; v) = lim sup_{y→x, r→0⁺} [f(y + rv) − f(y)] / r.  (5)

Furthermore, the Clarke generalized gradient of f at x is defined as

∂f(x) = {ξ ∈ R^n : f⁰(x; v) ≥ ⟨v, ξ⟩, ∀v ∈ R^n}.  (6)

Moreover, if f : R^n → R is a convex function, then it has the following properties as well.

Proposition 2. If f : R^n → R is a convex function, the following property holds:

f(x) − f(y) ≤ ⟨p, x − y⟩, ∀x, y ∈ R^n, ∀p ∈ ∂f(x).  (7)

Since the constraint set of (4) is a closed convex subset of R^n, we use the projection operator to handle the constraint. The projection operator of x onto the closed convex subset Ω is defined by

P_Ω(x) = argmin_{u ∈ Ω} ‖u − x‖.  (8)

The projection operator has the following properties

Proposition 3. Consider the following:

⟨v − P_Ω(v), P_Ω(v) − u⟩ ≥ 0, ∀v ∈ R^n, u ∈ Ω,
‖P_Ω(u) − P_Ω(v)‖ ≤ ‖u − v‖, ∀u, v ∈ R^n.  (9)
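As a concrete illustration of (8)-(9), the following MATLAB sketch projects a point onto a box set Ω = {x : l ≤ x ≤ u}, which is the form of Ω used later in the experiments; the bounds l, u and the variable names are illustrative assumptions, not part of the original paper.

% Projection (8) onto the box Omega = {x : l <= x <= u}, a closed convex set;
% for such Omega the minimizer has the closed form min(max(x, l), u).
l = zeros(5,1); u = ones(5,1);   % illustrative bounds
x = randn(5,1);
Px = min(max(x, l), u);          % P_Omega(x)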


Definition 4. Let h : R^n → R be a locally Lipschitz function. We call h̃ : R^n × [0, +∞) → R a smoothing function of h if h̃ satisfies the following conditions:

(i) For any fixed μ > 0, h̃(·, μ) is continuously differentiable in R^n, and for any fixed x ∈ R^n, h̃(x, ·) is differentiable in [0, +∞).
(ii) For any fixed x ∈ R^n, lim_{μ↓0} h̃(x, μ) = h(x).
(iii) There is a positive constant κ_h̃ > 0 such that

|∇_μ h̃(x, μ)| ≤ κ_h̃, ∀μ ∈ [0, +∞), x ∈ R^n.  (10)

(iv) lim_{z→x, μ↓0} ∇_z h̃(z, μ) ⊆ ∂h(x).

From the above definition we can get that, for any fixed x ∈ R^n,

lim_{z→x, μ↓0} h̃(z, μ) = h(x),
|h̃(x, μ) − h(x)| ≤ κ_h̃ μ, ∀μ ∈ [0, +∞), x ∈ R^n.  (11)

Next we present a smoothing function of the absolute value function, which is defined by

φ(y, μ) = |y|, if |y| ≥ μ;
φ(y, μ) = y²/(2μ) + μ/2, if |y| < μ.  (12)

Proposition 5 (see [21]). Consider the following:

(i) φ(y, μ) is continuously differentiable about y in R for any fixed μ > 0, and differentiable about μ for any fixed y ∈ R.
(ii) 0 ≤ ∇_μ φ(y, μ) ≤ 1 for all y ∈ R and all μ ∈ (0, 1].
(iii) 0 ≤ φ(y, μ) − |y| ≤ μ/2 for all y ∈ R and all μ ∈ (0, 1].
(iv) φ(y, μ) is convex about y for any fixed μ, and lim_{μ↓0} ∇_y φ(y, μ) ⊆ ∂|y|.
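To make (12) and Proposition 5 concrete, a short MATLAB sketch of the smoothing function and its gradient in y might look as follows; the vectorized anonymous-function form and the names are assumptions made for illustration only.

% Smoothing function (12) of |y| and its gradient in y (elementwise):
% for |y| >= mu: phi = |y|,                d phi/dy = sign(y);
% for |y| <  mu: phi = y.^2/(2*mu) + mu/2, d phi/dy = y/mu.
phi  = @(y, mu) abs(y).*(abs(y) >= mu) + (y.^2/(2*mu) + mu/2).*(abs(y) < mu);
dphi = @(y, mu) sign(y).*(abs(y) >= mu) + (y/mu).*(abs(y) < mu);
% Quick check of Proposition 5(iii): 0 <= phi(y,mu) - |y| <= mu/2.
y = linspace(-2, 2, 401); mu = 0.5;
gap = phi(y, mu) - abs(y);       % all values lie in [0, mu/2]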

3. Theoretical Results

In (4), D ∈ R^{m×n} can be rewritten as D = (d_1, d_2, ..., d_m)^T, where d_i (i = 1, 2, ..., m) is an n-dimensional vector. Then (4) can be rewritten as

min ‖Ax − b‖² + λ Σ_{i=1}^{m} |d_i^T x|
s.t. x ∈ Ω.  (13)

In the following, we use Σ_{i=1}^{m} φ(d_i^T x, μ) to approximate Σ_{i=1}^{m} |d_i^T x|. From the idea of the projected gradient method, we construct our neural network as follows:

ẋ(t) = −x(t) + P_Ω[x(t) − 2A^T(Ax(t) − b) − λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t))],  (14)

where x(0) = x₀, μ(t) = e^{−t}, and P_Ω is the projection operator on Ω. Next we will give some analysis of the proposed neural network (14).
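As an illustration of how (14) can be simulated numerically, the following MATLAB sketch integrates the network with a simple explicit Euler scheme on a box constraint Ω = {x : l ≤ x ≤ u}; the toy problem data, the step size, and all variable names are assumptions for this sketch rather than prescriptions from the paper.

% Illustrative Euler simulation of the network (14) with mu(t) = exp(-t).
% Toy problem data (assumptions for this sketch, not from the paper):
A = randn(3,5); b = randn(3,1); D = eye(5); lambda = 0.1;
l = zeros(5,1); u = ones(5,1);           % box constraint Omega = {x : l <= x <= u}
x = min(max(zeros(5,1), l), u);          % initial point x0 in Omega
dphi = @(y, mu) sign(y).*(abs(y) >= mu) + (y/mu).*(abs(y) < mu);  % gradient of (12)
dt = 1e-3;                               % illustrative step size
for t = 0:dt:10
    mu   = exp(-t);                                    % smoothing parameter mu(t)
    grad = 2*A'*(A*x - b) + lambda*D'*dphi(D*x, mu);   % smoothed gradient of (13)
    x    = x + dt*(-x + min(max(x - grad, l), u));     % Euler step on (14), box projection
end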

Theorem 6. For any initial point x₀ ∈ Ω, there is a global and uniformly bounded solution of (14).

Proof. The right-hand side of (14) is continuous about x and t; then there is a local solution of (14) with x₀ ∈ Ω. We assume that [0, T) is the maximal existence interval of t. First, we prove that x(t) ∈ Ω for all t ∈ [0, T). Obviously, (14) can be rewritten as

ẋ(t) + x(t) = P_Ω[x(t) − 2A^T(Ax(t) − b) − λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t))].  (15)

Integrating the above differential equation, we have

x(t) = e^{−t} x₀ + (1 − e^{−t}) ∫₀^t k(s) e^s/(e^t − 1) ds,  (16)

where k(t) = P_Ω[x(t) − 2A^T(Ax(t) − b) − λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t))].

Since ∫₀^t (e^s/(e^t − 1)) ds = 1, x₀ ∈ Ω, and Ω is a closed convex subset, x(t) is a convex combination of x₀ and points of Ω, and we confirm that

x(t) ∈ Ω, ∀t ∈ [0, T).  (17)

Differentiating ‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t)) along this solution of (14), we obtain

d/dt [‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t))]
= ⟨2A^T(Ax(t) − b), ẋ(t)⟩ + λ Σ_{i=1}^{m} [⟨∇_x φ(d_i^T x(t), μ(t)), ẋ(t)⟩ + ∇_μ φ(d_i^T x(t), μ(t)) μ̇(t)]
≤ ⟨2A^T(Ax(t) − b) + λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t)), ẋ(t)⟩,  (18)

where the inequality holds because ∇_μ φ ≥ 0 by Proposition 5 and μ̇(t) = −e^{−t} < 0.

Using the inequality of the projection operator on (14), we obtain

⟨2A^T(Ax(t) − b) + λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t)), ẋ(t)⟩ ≤ −‖ẋ(t)‖².  (19)


Thus

d/dt [‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t))] ≤ −‖ẋ(t)‖²,  (20)

which implies that ‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t)) is nonincreasing along the solution of (14). On the other hand, by Proposition 5 we know that

‖Ax(0) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(0), μ(0))
≥ ‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t))
≥ ‖Ax(t) − b‖² + λ‖Dx(t)‖₁.  (21)

Thus x(t) is bounded on [0, T). Using the extension theorem, the solution of (14) is globally existent and uniformly bounded.

Theorem 7. For any initial point x₀ ∈ Ω, the solution of (14) is unique and satisfies the following:

(i) ‖ẋ(t)‖ is nonincreasing on [0, +∞) and lim_{t→+∞} ẋ(t) = 0;
(ii) the solution of (14) is convergent to the optimal solution set of (4).

Proof. Suppose that there exist two solutions x : [0, ∞) → R^n and y : [0, ∞) → R^n of (14) with initial points x₀ = y₀, which means that

ẋ(t) = −x(t) + P_Ω[x(t) − 2A^T(Ax(t) − b) − λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t))],
ẏ(t) = −y(t) + P_Ω[y(t) − 2A^T(Ay(t) − b) − λ Σ_{i=1}^{m} ∇_y φ(d_i^T y(t), μ(t))].  (22)

Thus

d/dt (1/2)‖x(t) − y(t)‖² = ⟨x(t) − y(t), ẋ(t) − ẏ(t)⟩
= −‖x(t) − y(t)‖² + ⟨x(t) − y(t), P_Ω[ξ₁(t)] − P_Ω[ξ₂(t)]⟩,  (23)

where ξ₁(t) = x(t) − 2A^T(Ax(t) − b) − λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t), μ(t)) and ξ₂(t) = y(t) − 2A^T(Ay(t) − b) − λ Σ_{i=1}^{m} ∇_y φ(d_i^T y(t), μ(t)).

From the expression of P_Ω and x(t), y(t) ∈ Ω for all t ≥ 0, we get

⟨ξ₁(t) − P_Ω[ξ₁(t)], x(t) − y(t)⟩ = 0, ∀t ≥ 0,
⟨ξ₂(t) − P_Ω[ξ₂(t)], x(t) − y(t)⟩ = 0, ∀t ≥ 0.  (24)

Thus we have

⟨x(t) − y(t), P_Ω[ξ₁(t)] − P_Ω[ξ₂(t)]⟩ = ⟨x(t) − y(t), ξ₁(t) − ξ₂(t)⟩
= ‖x(t) − y(t)‖² − 2‖A(x(t) − y(t))‖² − λ Σ_{i=1}^{m} ⟨x(t) − y(t), ∇_x φ(d_i^T x(t), μ(t)) − ∇_y φ(d_i^T y(t), μ(t))⟩.  (25)

Since φ(d_i^T y, μ) (i = 1, 2, ..., m) is convex about y for any fixed μ, we have

⟨x − y, ∇_x φ(d_i^T x, μ) − ∇_y φ(d_i^T y, μ)⟩ ≥ 0, ∀x, y ∈ R^n.  (26)

Thus, for all t ≥ 0,

Σ_{i=1}^{m} ⟨x(t) − y(t), ∇_x φ(d_i^T x(t), μ(t)) − ∇_y φ(d_i^T y(t), μ(t))⟩ ≥ 0.  (27)

Using (25) and (27) in (23), we have

d/dt (1/2)‖x(t) − y(t)‖²
= −2‖A(x(t) − y(t))‖² − ⟨x(t) − y(t), λ Σ_{i=1}^{m} [∇_x φ(d_i^T x(t), μ(t)) − ∇_y φ(d_i^T y(t), μ(t))]⟩ ≤ 0, ∀t ∈ [0, +∞).  (28)

By integrating (28) from 0 to t, we derive that

sup_{t≥0} ‖x(t) − y(t)‖ ≤ ‖x₀ − y₀‖.  (29)

Therefore x(t) = y(t) for all t ≥ 0 when x₀ = y₀, which gives the uniqueness of the solution of (14). Let y(t) = x(t + h), where h > 0; then (29) implies that

‖x(t + h) − x(t)‖ ≤ ‖x(h) − x(0)‖, ∀t ≥ 0.  (30)

Therefore t → ‖ẋ(t)‖ is nonincreasing.


From (20), we obtain that ‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t)) is nonincreasing and bounded from below on [0, +∞); therefore we have that

lim_{t→+∞} [‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t))] exists.  (31)

Using Proposition 5 in the above result, we obtain that

lim_{t→+∞} [‖Ax(t) − b‖² + λ‖Dx(t)‖₁] exists.  (32)

Moreover, we have that

lim_{t→+∞} d/dt [‖Ax(t) − b‖² + λ Σ_{i=1}^{m} φ(d_i^T x(t), μ(t))] = 0.  (33)

Combining (20) and (33), we confirm that

lim_{t→+∞} ẋ(t) = 0.  (34)

Since x(t) is uniformly bounded on the global interval, there is a cluster point of it, denoted as x*, which means that there exists an increasing sequence t_n such that

lim_{n→+∞} t_n = +∞, lim_{n→+∞} x(t_n) = x*.  (35)

Using the expression of (14) and lim_{t→+∞} ẋ(t) = 0, we have

x* = P_Ω[x* − lim_{n→+∞}(2A^T(Ax(t_n) − b) + λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t_n), μ(t_n)))].  (36)

Using Proposition 3 in the above equation, we have

⟨lim_{n→+∞}(2A^T(Ax(t_n) − b) + λ Σ_{i=1}^{m} ∇_x φ(d_i^T x(t_n), μ(t_n))), v − x*⟩ ≥ 0, ∀v ∈ Ω.  (37)

From Proposition 5, there exists ξ ∈ ∂[‖Ax* − b‖² + λ‖Dx*‖₁] such that

⟨ξ, v − x*⟩ ≥ 0, ∀v ∈ Ω.  (38)

Therefore x* is a Clarke stationary point of (4). Since (4) is a convex program, x* is an optimal solution of (4). Since the cluster point of x(t) was chosen arbitrarily, we know that any cluster point of x(t) is an optimal solution of (4), which means that the solution of (14) converges to the optimal solution set of (4).

4. Numerical Experiments

In this section, we will give two numerical experiments to validate the theoretical results obtained in this paper and the good performance of the proposed neural network in solving sparse reconstruction problems.

Example 1. In this experiment, we test signal recovery with noise. An original signal with sparsity 1 means that there is only one sound at each time point. We use the following MATLAB code to generate a length-100 original signal s ∈ R^{5×100}, mixing matrix A ∈ R^{3×5}, observed signal b ∈ R^{3×100}, and noise n ∈ R^{3×100}:

s = zeros(5,100);
for l = 1:100
    q = randperm(5); s(q(1:2),l) = 2*randn(2,1);
end
A = randn(3,5);
n = 0.05*randn(3,100);
b = A*s - n;

We denote s* as the recovered signal using our method. Figures 1(a)-2(a) show the original, observed, and recovered signals using (14). Figure 2(b) presents the convergence of the signal-to-noise ratio (SNR) along the solution of the proposed neural network. From this figure, we see that our method recovers this random original signal effectively. We should state that the SNR of the recovered signal is 22.15 dB, where

SNR = −(1/L) Σ_{l=1}^{L} 20 lg(‖s*(l) − s(l)‖² / ‖s(l)‖²).  (39)
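For reference, a direct MATLAB transcription of (39) might look as follows; the anonymous-function form and the name s_rec for the recovered signal are illustrative assumptions.

% SNR of a recovered signal s_rec against the original s, following (39):
% the l-th column is one time point; L = number of columns.
snr39 = @(s, s_rec) -mean(20*log10(sum((s_rec - s).^2, 1) ./ sum(s.^2, 1)));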

Example 2. In this experiment, we perform the proposed network (14) on the restoration of a 20 × 20 circle image. The observed image is distorted from the unknown true image mainly by two factors: the blurring and the random noise. The blurring is a 2D Gaussian function,

h(i, j) = e^{−2(i/3)² − 2(j/3)²},  (40)

which is truncated such that the function has a support of 7 × 7. A Gaussian noise with zero mean and standard deviation of 0.05 is added to the blurred image. Figures 3(a) and 3(b) present the original and the observed images, respectively. The peak signal-to-noise ratio (PSNR) of the observed image is 16.87 dB.

Figure 1: (a) Original signals; (b) observed signals.

Figure 2: (a) Recovered signals; (b) the convergence of SNR(x(t)).

Denote x_o and x_b as the original and corresponding observed images, and use the PSNR to evaluate the quality of the restored image, that is,

PSNR(x) = −10 log₁₀(‖x − x_o‖² / (20 × 20)).  (41)
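A one-line MATLAB transcription of (41) might be the following; x and x_o are assumed to hold the restored and original images, stored either as vectors or as matrices (using the Frobenius norm).

% PSNR of a restored 20-by-20 image x against the original x_o, per (41).
psnr41 = @(x, x_o) -10*log10(norm(x(:) - x_o(:))^2 / (20*20));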

We use problem (13) to recover this image, where we let λ = 0.017, Ω = {x : 0 ≤ x ≤ e}, and

D = [L₁ ⊗ I; I ⊗ L₁]  with  L₁ = [1 −1; 1 −1; ⋱ ⋱; 1 −1],  (42)

where ⊗ denotes the Kronecker product and L₁ is the first-order difference matrix.
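As a sketch of how the difference operator in (42) can be assembled for the 20 × 20 image (treating the image as a vector of length 400), one might write the following MATLAB code; the use of sparse storage is an assumption made for convenience.

% Build D = [kron(L1,I); kron(I,L1)] for an N-by-N image, as in (42).
N  = 20;
e  = ones(N,1);
L1 = spdiags([e -e], [0 1], N, N);    % first-order difference matrix with 1, -1
I  = speye(N);
D  = [kron(L1, I); kron(I, L1)];      % stacked differences along both image directions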

Choose x₀ = P_Ω(b). The image recovered by (14) with x₀ is shown in Figure 3(c), with PSNR = 19.65 dB. The convergence of the objective value and of PSNR(x(t)) along the solution of (14) with initial point x₀ is presented in Figures 4(a) and 4(b). From Figures 4(a) and 4(b), we find that the objective value is monotonically decreasing and the PSNR is monotonically increasing along the solution of (14).

Figure 3: (a) Original image; (b) observed image; (c) recovered image.

Figure 4: (a) Convergence of the objective value; (b) convergence of PSNR(x(t)).

5. Conclusion

Based on the smoothing approximation technique and the projected gradient method, we construct a neural network modeled by a differential equation to solve a class of constrained nonsmooth convex optimization problems, which have wide applications in sparse reconstruction. The proposed network has a unique and bounded solution for any initial point in the feasible region. Moreover, the solution of the proposed network converges to the solution set of the optimization problem. Simulation results on numerical examples are elaborated upon to substantiate the effectiveness and performance of the neural network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Editor-in-Chief, Professor Huaiqin Wu, and the three anonymous reviewers for their insightful and constructive comments, which helped to enrich the content and improve the presentation of the results in this paper. This work is supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and the National Natural Science Foundation of China (31370565).

References

[1] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
[2] R. Saab, O. Yilmaz, M. J. McKeown, and R. Abugharbieh, "Underdetermined anechoic blind source separation via l_q-basis-pursuit with q < 1," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4004–4017, 2007.
[3] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] L. B. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2494–2502, 2009.
[5] Y. Xiang, S. K. Ng, and V. K. Nguyen, "Blind separation of mutually correlated sources using precoders," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 82–90, 2010.
[6] M. K. Ng, R. H. Chan, and W.-C. Tang, "Fast algorithm for deblurring models with Neumann boundary conditions," SIAM Journal on Scientific Computing, vol. 21, no. 3, pp. 851–866, 1999.
[7] D. L. Donoho, "Neighborly polytopes and sparse solutions of underdetermined linear equations," Tech. Rep., Department of Statistics, Stanford University, Stanford, Calif, USA, 2005.
[8] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
[9] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, London, UK, 1993.
[10] W. Bian and X. Xue, "Subgradient-based neural networks for nonsmooth nonconvex optimization problems," IEEE Transactions on Neural Networks, vol. 20, no. 6, pp. 1024–1038, 2009.
[11] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[12] X. B. Gao and L. Z. Liao, "A new one-layer recurrent neural network for nonsmooth convex optimization subject to linear equality constraints," IEEE Transactions on Neural Networks, vol. 21, pp. 918–929, 2010.
[13] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[14] D. W. Tank and J. J. Hopfield, "Simple neural optimization network: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533–541, 1986.
[15] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
[16] E. K. P. Chong, S. Hui, and S. H. Zak, "An analysis of a class of neural networks for solving linear programming problems," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 1995–2006, 1999.
[17] M. Forti, P. Nistri, and M. Quincampoix, "Generalized neural network for nonsmooth nonlinear programming problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 9, pp. 1741–1754, 2004.
[18] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 55, no. 8, pp. 2378–2391, 2008.
[19] X. Chen, M. K. Ng, and C. Zhang, "Nonconvex lp-regularization and box constrained model for image restoration," IEEE Transactions on Image Processing, vol. 21, pp. 4709–4721, 2010.
[20] W. Bian and X. Chen, "Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 545–556, 2014.
[21] X. Chen, "Smoothing methods for complementarity problems and their applications: a survey," Journal of the Operations Research Society of Japan, vol. 43, no. 1, pp. 32–47, 2000.
[22] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, Germany, 1998.
[23] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[24] Q. Liu and J. Cao, "A recurrent neural network based on projection operator for extended general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 40, no. 3, pp. 928–938, 2010.


Page 2: Research Article Neural Network for Sparse Reconstructiondownloads.hindawi.com/journals/mpe/2014/107620.pdfa class of nonsmooth convex optimization problems. In [ ], a neural network

2 Mathematical Problems in Engineering

A class of signal recovery problems can be formulated as

min 119860119909 minus 1198872

+ 1205821198631199091

st 119909 isin Ω

(4)

where119863 is a linear operator 120582 is the regularization parameterthat controls the trade-off between the regularization termand the data-fitting term and constraint set Ω is a closedconvex subset of R119899

Optimization problems arise in a variety of scientificand engineering applications and they really need real timesolutions Since the computing time greatly depends on thedimension and the structure of the optimization problemsnumerical algorithms are usually less effective in large scaleor real time optimization problems In many applicationsreal time optimal solutions are usually imperative such ason-board signal processing and robot motion planning andcontrol One promising approach to handle these problems isto employ artificial neural network During recent decadesneural dynamical method for solving optimization problemshas been a major area in neural network research based oncircuit implementation [9ndash12] First the structure of a neuralnetwork can be implemented physically by designated hard-ware such as application-specific integrated circuits where thecomputational procedure is distributed and parallel This letsthe neural network approach solve optimization problems inrunning time at the order of magnitude much faster thanconventional optimization algorithms executed on general-purpose digital computers Second neural networks can solvemany optimization problems with time-varying parametersThird the dynamical and ODE techniques can be appliedto the continuous-time neural networks And recent reportshave shown that the global convergence can be obtained bythe neural network approach under some weaker conditions

Since the neural network was first proposed for solvinglinear [13 14] and nonlinear [15] programming problemsmany researchers were inspired to develop neural networksfor optimization Many types of neural networks have beenproposed to solve various optimization problems for exam-ple the recurrent neural network the Lagrangian networkthe deterministic annealing network the projection-typeneural network a generalized neural network and so forthIn [16] Chong et al proposed a neural network for linearprogramming problem with finite time convergence In [17]a generalized neural network was presented for solvinga class of nonsmooth convex optimization problems In[18] a neural network was defined by using the penaltyfunction method and differential inclusion for solving aclass of nonsmooth convex optimization problems In factin many important applications neural network built by adifferential inclusion is an important method to solve a classof nonsmooth optimization problems One has to mentionthat the optimization problems are not differentiable in manyimportant applications Moreover the neural networks forsmooth optimization problems required the gradients of theobjective and constrained functions in such neural networksSo these networks cannot solve nonsmooth optimizationproblems Using smoothing techniques in neural networkis an effective method for solving nonsmooth optimization

problems [19 20] The main feature of smoothing method isto approximate the nonsmooth functions by parameterizedsmooth functions [21 22] By smoothing approximations wecan give a class of smooth functions which converge to theoriginal nonsmooth function and whose gradients convergeto the subgradient of nonsmooth function For solving manyconstrained optimization problems projection is a simpleand effective method for solving the constraints In [23 24]projection had been used in neural networks for solving somekind of constrained optimization problems

Basing on the advantages of the neural networks inthis paper we will propose a neural network and use somemathematical techniques to solve optimization problem (4)The problem (4) is nonsmooth Many neural networks aremodeled by differential inclusions which have the difficultyin the choice of the right set-valued map In this paperwe will introduce a smoothing function to overcome thisproblem Using smoothing techniques into neural network isan interesting and promise method for solving (4)

NotationThroughout this paper sdot denotes the 1198972norm and

sdot 1denotes the 119897

1norm

2 Preliminary Results

In this section we will introduce several basic definitions andlemmas which are used in the development

Definition 1 Suppose that 119891 is Lipschitz near 119909 the general-ized directional derivative of 119891 at 119909 in the direction V isin R119899 isgiven by

1198910

(119909 V) = lim sup119910rarr119909119903rarr0

+

119891 (119910 + 119903V) minus 119891 (119910)

119903 (5)

Furthermore the Clarke generalized gradient of 119891 at 119909 isdefined as

120597119891 (119909) = 120585 isin R119899

1198910

(119909 V) ge ⟨V 120585⟩ forallV isin R119899

(6)

Moreover if119891 R119899 rarr R is a convex function then it hasthe following properties as well

Proposition 2 If 119891 R119899 rarr R is a convex function thefollowing property holds

119891 (119909) minus 119891 (119910) le ⟨119901 119909 minus 119910⟩ forall119909 119910 isin R119899

forall119901 isin 120597119891 (119909)

(7)

Since the constraint set of (4) is a closed convex subsetof R119899 then we use the projection operator to handle theconstraint The projection operator of 119909 to the closed convexsubset Ω is defined by

119875Ω(119909) = argmin

119906isinΩ

119906 minus 119909 (8)

The projection operator has the following properties

Proposition 3 Consider the following

⟨V minus 119875Ω(V) 119875Ω(V) minus 119906⟩ ge 0 forallV isin R

119899

119906 isin Ω

1003817100381710038171003817119875Ω (119906) minus 119875Ω(V)1003817100381710038171003817 le 119906 minus V forall119906 V isin R

119899

(9)

Mathematical Problems in Engineering 3

Definition 4 Let ℎ R119899 rarr R be a locally Lipschitz functionWe call ℎ R119899 times [0 +infin) rarr R a smoothing function of ℎ ifℎ satisfies the following conditions

(i) For any fixed 120583 gt 0 ℎ(sdot 120583) is continuously differ-entiable in R119899 and for any fixed 119909 isin R119899 ℎ(119909 sdot) isdifferentiable in [0 +infin)

(ii) For any fixed 119909 isin R119899 lim120583darr0

ℎ(119909 120583) = ℎ(119909)(iii) There is a positive constant 120581

ℎgt 0 such that

10038161003816100381610038161003816nabla120583ℎ (119909 120583)

10038161003816100381610038161003816le 120581ℎ forall120583 isin [0 +infin) 119909 isin R

119899

(10)

(iv) lim119911rarr119909120583darr0

nabla119911ℎ(119911 120583) sube 120597ℎ(119909)

From the above definition we can get that for any fixed119909 isin R119899

lim119911rarr119909120583darr0

ℎ (119911 120583) = ℎ (119909)

10038161003816100381610038161003816ℎ (119909 120583) minus ℎ (119909)

10038161003816100381610038161003816le 120581ℎ120583 forall120583 isin [0 +infin) 119909 isin R

119899

(11)

Next we present a smoothing function of the absolutevalue function which is defined by

120593 (119910 120583) =

10038161003816100381610038161199101003816100381610038161003816 if 1003816100381610038161003816119910

1003816100381610038161003816 ge 120583

1199102

2120583+120583

2if 1003816100381610038161003816119910

1003816100381610038161003816 lt 120583(12)

Proposition 5 (see [21]) Consider the following(i) 120593(119910 120583) is continuously differentiable about 119910 in R for

any fixed 120583 gt 0 and differentiable about 120583 for any fixed119910 isin R

(ii) 0 le nabla120583120593(119910 120583) le 1 for all 119910 isin R for all 120583 isin (0 1]

(iii) 0 le 120593(119910 120583)minus |119910| le 1205832 for all 119910 isin R for all 120583 isin (0 1](iv) 120593(119910 120583) is convex about 119910 for any fixed 120583 and

lim120583darr0

nabla119910120593(119910 120583) sube 120597|119910|

3 Theoretical Results

In (4) 119863 isin R119898times119899 can be rewritten as 119863 = (1198891 1198892 119889

119898)119879

where 119889119894(119894 = 1 2 119898) is an 119899 dimensional vector Then

(4) can be rewritten as

min 119860119909 minus 1198872

+ 120582

119898

sum

119894=1

10038161003816100381610038161003816119889119879

11989411990910038161003816100381610038161003816

st 119909 isin Ω

(13)

In the following we use sum119898

119894=1120593(119889119879

119894119909 120583) to approximate

sum119898

119894=1|119889119879

119894119909| From the idea of the projected gradient method

we construct our neural network as follows

(119905) = minus119909 (119905) + 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

(14)

where 119909(0) = 1199090 120583(119905) = 119890

minus119905 and 119875Ωis the projection operator

onΩNext we will give some analysis on the proposed neural

network (14)

Theorem 6 For any initial point 1199090isin Ω there is a global and

uniformly bounded solution of (14)

Proof The right hand of (14) is continuous about 119909 and 119905then there is a local solution of (14) with 119909

0isin Ω And we

assume that [0 119879) is themaximal existence interval of 119905 Firstwe prove that 119909(119905) isin Ω for all 119905 isin [0 119879) Obviously (14) canbe rewritten as

(119905) + 119909 (119905) = 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

(15)

From the integration about the above differential equa-tion we have

119909 (119905) = 119890minus119905

1199090+ (1 minus 119890

119905

)int

119905

0

119896 (119904)119890119904

119890119905 minus 1119889119904 (16)

where 119896(119905) = 119875Ω[119909(119905) minus 2119860

119879

(119860119909(119905) minus 119887) minus 120582sum119898

119894=1nabla119909120593(119889119879

119894119909(119905)

120583(119905))]Since int119905

0

(119890119904

(119890119905

minus 1))119889119904 = 1 1199090isin Ω and Ω is a closed

convex subset we confirm that

119909 (119905) isin Ω forall119905 isin [0 119879) (17)

Differentiating 119860119909(119905) minus 1198872

+120582sum119898

119894=1120593(119889119879

119894119909(119905) 120583(119905)) along

this solution of (14) we obtain

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))]

= ⟨2119860119879

(119860119909 (119905) minus 119887) (119905)⟩

+ 120582

119898

sum

119894=1

[⟨nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

+nabla120583120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)]

le ⟨2119860119879

(119860119909 (119905) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

(18)

Using the inequality of project operator to (14) we obtain

⟨2119860119879

(119860119909 (119905) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

le minus (119905)2

(19)

4 Mathematical Problems in Engineering

Thus

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] le minus (119905)

2

(20)

which follows that 119860119909(119905) minus 1198872

+ 120582sum119898

119894=1120593(119889119879

119894119909(119905) 120583(119905)) is

nonincreasing along the solution of (14) On the other handby Proposition 5 we know that

119860119909(0) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (0) 120583 (0))

ge 119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))

ge 119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1

(21)

Thus 119909(119905) is bounded on [0 119879) Using the extension the-orem the solution of (14) is globally existent and uniformlybounded

Theorem 7 For any initial point 1199090isin Ω the solution of (14)

is unique and satisfies the following

(i) (119905) is nonincreasing on [0 +infin) andlim119905rarr+infin

(119905) = 0(ii) the solution of (14) is convergent to the optimal solution

set of (4)

Proof Suppose that there exist two solutions 119909 [0infin) rarr

R119899 and 119910 [0infin) rarr R119899 of (14) with initial point 1199090= 1199100

which means that

(119905) = minus119909 (119905) + 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

119910 (119905) = minus119910 (119905) + 119875Ω[119910 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]

(22)

Thus

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910 (119905)10038171003817100381710038172

= ⟨119909 (119905) minus 119910 (119905) (119905) minus 119910 (119905)⟩

= minus1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

+ ⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

(23)

where 1205851(119905) = 119909(119905)minus2119860

119879

(119860119909(119905)minus119887)minus120582sum119898

119894=1nabla119909120593(119889119879

119894119909(119905) 120583(119905))

and 1205852(119905) = 119910(119905) minus 2119860

119879

(119860119909(119905) minus 119887) minus 120582sum119898

119894=1nabla119910120593(119889119879

119894119910(119905) 120583(119905))

From the expression of119875Ωand 119909(119905) 119910(119905) isin Ω for all 119905 ge 0

we get

⟨1205851(119905) minus 119875

Ω[1205851(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

⟨1205852(119905) minus 119875

Ω[1205852(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

(24)

Thus we have

⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

= ⟨119909 (119905) minus 119910 (119905) 1205851(119905) minus 120585

2(119905)⟩

=1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

minus 21003817100381710038171003817119860 (119909 (119905) minus 119910 (119905))

10038171003817100381710038172

minus 120582

119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))⟩

(25)

Since120593(119889119879119894119910 120583) (119894 = 1 2 119898) is convex about119910 for any

fixed 120583 we have

⟨119909 minus 119910 nabla119909120593 (119889119879

119894119909 120583) minus nabla

119910120593 (119889119879

119894119910 120583)⟩ ge 0 forall119909 119910 isin R

119899

(26)

Thus for all 119905 ge 0119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905)) ⟩ ge 0

(27)

Using (25) and (27) into (23) we have

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910(119910)10038171003817100381710038172

= minus21003817100381710038171003817119860(119909(119905) minus 119910(119905))

10038171003817100381710038172

minus⟨119909 (119905) minus 119910 (119905) 120582

119898

sum

119894=1

[nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]⟩ le 0

forall119905 isin [0 +infin)

(28)

By integrating (28) from 0 to 119905 we derive that

sup119905ge0

1003817100381710038171003817119909 (119905) minus 119910 (119905)1003817100381710038171003817 le

10038171003817100381710038171199090 minus 1199100

1003817100381710038171003817 (29)

Therefore 119909(119905) = 119910(119905) for all 119905 ge 0 when 1199090= 1199100 which

derives the uniqueness of the solution of (14)Let 119910(119905) = 119909(119905 + ℎ) where ℎ gt 0 (29) implies that

119909 (119905 + ℎ) minus 119909 (119905) le 119909 (ℎ) minus 119909 (0) forall119905 ge 0 (30)

Therefore 119905 rarr (119905) is nonincreasing

Mathematical Problems in Engineering 5

From (20) we obtain that 119860119909(119905) minus 1198872

+120582sum119898

119894=1120593(119889119879

119894119909(119905)

120583(119905)) is nonincreasing and bounded form below on [0 +infin)therefore we have that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] exists (31)

Using Proposition 5 to the above result we obtain that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1] exists (32)

Moreover we have that

lim119905rarr+infin

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] = 0 (33)

Combining (20) and (33) we confirm that

lim119905rarr+infin

(119905) = 0 (34)

Since x(t) is uniformly bounded on [0, +∞), it has a cluster point, denoted x*; that is, there exists an increasing sequence {t_n} such that

\lim_{n \to +\infty} t_n = +\infty, \qquad \lim_{n \to +\infty} x(t_n) = x^*.  (35)

Using the expression of (14) and lim_{t→+∞} ‖ẋ(t)‖ = 0, we have

x^* = P_\Omega\Big[ x^* - \lim_{n \to +\infty} \Big( 2A^T(Ax(t_n) - b) + \lambda \sum_{i=1}^{m} \nabla_x\varphi(d_i^T x(t_n), \mu(t_n)) \Big) \Big].  (36)

Using Proposition 3 in the above equation, we have

\Big\langle \lim_{n \to +\infty} \Big( 2A^T(Ax(t_n) - b) + \lambda \sum_{i=1}^{m} \nabla_x\varphi(d_i^T x(t_n), \mu(t_n)) \Big),\; v - x^* \Big\rangle \ge 0, \quad \forall v \in \Omega.  (37)

From Proposition 5, there exists ξ ∈ ∂[‖Ax* - b‖^2 + λ‖Dx*‖_1] such that

\langle \xi,\; v - x^* \rangle \ge 0, \quad \forall v \in \Omega.  (38)

Therefore x* is a Clarke stationary point of (4). Since (4) is a convex program, x* is an optimal solution of (4). Because x* was an arbitrary cluster point of x(t), every cluster point of x(t) is an optimal solution of (4), which means that the solution of (14) converges to the optimal solution set of (4).

4. Numerical Experiments

In this section we present two numerical experiments to validate the theoretical results obtained in this paper and to illustrate the good performance of the proposed neural network in solving sparse reconstruction problems.

Example 1. In this experiment we test the recovery of a sparse signal from noisy observations; the original signal is sparse at every time point, that is, only a few sources are active at any given moment. We use the following MATLAB code to generate an original signal s ∈ R^{5×100} of length 100, a mixing matrix A ∈ R^{3×5}, an observed signal b ∈ R^{3×100}, and noise n ∈ R^{3×100}:

s = zeros(5,100);
for l = 1:100
    q = randperm(5); s(q(1:2),l) = 2*randn(2,1);
end
A = randn(3,5);
n = 0.05*randn(3,100);
b = A*s - n;
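The paper does not list the code used to integrate the network (14) itself; the following is a minimal sketch of one possible discretization, assuming a plain forward-Euler step, D = I (so the regularizer is the ordinary l1 penalty on x), Ω = R^n (so the projection P_Ω reduces to the identity), and ad hoc values of λ, the step size, and the number of iterations. The helper smoothGrad, placed in its own file or as a local function, implements the gradient of the smoothing function φ(·, μ) of (12).

lambda = 0.1; dt = 0.01; T = 2000;
proj = @(z) z;                           % P_Omega for Omega = R^n (assumption)
sRec = zeros(size(s));                   % recovered signal, column by column
for l = 1:100
    x = zeros(5,1);                      % initial point x0
    for k = 1:T
        mu = exp(-(k-1)*dt);             % mu(t) = e^(-t)
        g  = 2*A'*(A*x - b(:,l)) + lambda*smoothGrad(x, mu);
        x  = x + dt*(-x + proj(x - g));  % Euler step of the network (14)
    end
    sRec(:,l) = x;
end

function g = smoothGrad(y, mu)
% Componentwise gradient of the smoothing function phi(., mu) in (12):
% sign(y_i) where |y_i| >= mu, and y_i/mu where |y_i| < mu.
g = sign(y);
small = abs(y) < mu;
g(small) = y(small)/mu;
end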

We denote by s* the recovered signal obtained with our method. Figures 1(a)-2(a) show the original, observed, and recovered signals using (14). Figure 2(b) presents the convergence of the signal-to-noise ratio (SNR) along the solution of the proposed neural network. From this figure we see that our method recovers this random original signal effectively. We should also state that the SNR of the recovered signal is 22.15 dB, where

\mathrm{SNR} = -\frac{1}{L} \sum_{l=1}^{L} 20 \lg \frac{\|s^*(l) - s(l)\|^2}{\|s(l)\|^2}.  (39)
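As a check on the notation in (39), the SNR can be evaluated directly; in the sketch below s is the original signal, sRec is a recovered signal (both 5 × 100, as in the code above; sRec is a placeholder name), and lg is the base-10 logarithm.

L = size(s,2);
snrVal = 0;
for l = 1:L
    snrVal = snrVal - (20/L)*log10( norm(sRec(:,l) - s(:,l))^2 / norm(s(:,l))^2 );
end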

Example 2. In this experiment we apply the proposed network (14) to the restoration of a 20 × 20 circle image. The observed image is distorted from the unknown true image mainly by two factors: blurring and random noise. The blur is given by the 2D Gaussian function

h(i, j) = e^{-2(i/3)^2 - 2(j/3)^2},  (40)

which is truncated so that the function has a support of 7 × 7. Gaussian noise with zero mean and standard deviation 0.05 is added to the blurred image. Figures 3(a) and 3(b) present the original and the observed images, respectively. The peak signal-to-noise ratio (PSNR) of the observed image is 16.87 dB. Denote by x_o and x_b the original and the corresponding observed images, respectively.
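For illustration, the truncated 7 × 7 kernel of (40) and a degraded observation could be formed as follows; normalizing the kernel, using conv2 with zero padding, and adding (rather than subtracting) the noise are assumptions about details the paper leaves implicit, and xo stands for the 20 × 20 true circle image with values in [0, 1].

[i, j] = meshgrid(-3:3, -3:3);           % 7x7 support of the PSF
h = exp(-2*(i/3).^2 - 2*(j/3).^2);       % Gaussian blur of (40)
h = h/sum(h(:));                         % normalization (assumption)
xb = conv2(xo, h, 'same') + 0.05*randn(20,20);   % blurred, noisy observation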



Figure 1: (a) Original signals; (b) observed signals.


Figure 2: (a) Recovered signals; (b) the convergence of SNR(x(t)).

We use the PSNR to evaluate the quality of the restored image, that is,

\mathrm{PSNR}(x) = -10 \log_{10} \frac{\|x - x_o\|^2}{20 \times 20}.  (41)

We use problem (13) to recover this image, where we let λ = 0.017, Ω = {x : 0 ≤ x ≤ e}, and

D = \begin{pmatrix} L_1 \otimes I \\ I \otimes L_1 \end{pmatrix}, \quad \text{with } L_1 = \begin{pmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \end{pmatrix}.  (42)
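A sparse construction of (42) in MATLAB might look as follows; taking L1 of size (n - 1) × n with n = 20, and using speye/kron, are assumptions about details that (42) leaves implicit.

n  = 20;
L1 = spdiags(ones(n-1,1)*[1 -1], [0 1], n-1, n);   % first-order difference matrix
D  = [kron(L1, speye(n)); kron(speye(n), L1)];     % the operator D of (42)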

Choose x_0 = P_Ω(b). The image recovered by (14) from this x_0 is shown in Figure 3(c), with PSNR = 19.65 dB. The convergence of the objective value and of PSNR(x(t)) along the solution of (14) with initial point x_0 is presented in Figures 4(a) and 4(b). From these figures we find that the objective value is monotonically decreasing and the PSNR is monotonically increasing along the solution of (14).
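For completeness, the projection onto Ω = {x : 0 ≤ x ≤ e} used for the initial point, and the PSNR of (41), are one-liners; here xb and xo are as in the earlier sketch and xRec is a placeholder for the (vectorized) image restored by (14).

Pbox = @(x) min(max(x, 0), 1);                    % projection onto the box [0, 1]^n
x0   = Pbox(xb(:));                               % initial point x0 = P_Omega(b)
psnrVal = -10*log10( norm(xRec(:) - xo(:))^2 / (20*20) );   % PSNR of (41)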

5. Conclusion

Based on the smoothing approximation technique and the projected gradient method, we construct a neural network, modeled by a differential equation, to solve a class of constrained nonsmooth convex optimization problems which have wide applications in sparse reconstruction. The proposed network has a unique and bounded solution for any initial point in the feasible region.



Figure 3: (a) Original image; (b) observed image; (c) recovered image.


Figure 4: (a) Convergence of the objective value f(x(t)); (b) convergence of PSNR(x(t)).


Moreover, the solution of the proposed network converges to the solution set of the optimization problem. Simulation results on numerical examples are elaborated upon to substantiate the effectiveness and performance of the neural network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Editor-in-Chief, Professor Huaiqin Wu, and the three anonymous reviewers for their insightful and constructive comments, which helped to enrich the content and improve the presentation of the results in this paper. This work is supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and the National Natural Science Foundation of China (31370565).

References

[1] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
[2] R. Saab, O. Yilmaz, M. J. McKeown, and R. Abugharbieh, "Underdetermined anechoic blind source separation via lq-basis-pursuit with q < 1," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4004–4017, 2007.
[3] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] L. B. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2494–2502, 2009.
[5] Y. Xiang, S. K. Ng, and V. K. Nguyen, "Blind separation of mutually correlated sources using precoders," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 82–90, 2010.
[6] M. K. Ng, R. H. Chan, and W.-C. Tang, "A fast algorithm for deblurring models with Neumann boundary conditions," SIAM Journal on Scientific Computing, vol. 21, no. 3, pp. 851–866, 1999.
[7] D. L. Donoho, "Neighborly polytopes and sparse solutions of underdetermined linear equations," Tech. Rep., Department of Statistics, Stanford University, Stanford, Calif, USA, 2005.
[8] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
[9] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, London, UK, 1993.
[10] W. Bian and X. Xue, "Subgradient-based neural networks for nonsmooth nonconvex optimization problems," IEEE Transactions on Neural Networks, vol. 20, no. 6, pp. 1024–1038, 2009.
[11] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[12] X. B. Gao and L. Z. Liao, "A new one-layer recurrent neural network for nonsmooth convex optimization subject to linear equality constraints," IEEE Transactions on Neural Networks, vol. 21, pp. 918–929, 2010.
[13] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[14] D. W. Tank and J. J. Hopfield, "Simple neural optimization network: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533–541, 1986.
[15] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
[16] E. K. P. Chong, S. Hui, and S. H. Zak, "An analysis of a class of neural networks for solving linear programming problems," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 1995–2006, 1999.
[17] M. Forti, P. Nistri, and M. Quincampoix, "Generalized neural network for nonsmooth nonlinear programming problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 9, pp. 1741–1754, 2004.
[18] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 55, no. 8, pp. 2378–2391, 2008.
[19] X. Chen, M. K. Ng, and C. Zhang, "Nonconvex lp-regularization and box constrained model for image restoration," IEEE Transactions on Image Processing, vol. 21, pp. 4709–4721, 2010.
[20] W. Bian and X. Chen, "Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 545–556, 2014.
[21] X. Chen, "Smoothing methods for complementarity problems and their applications: a survey," Journal of the Operations Research Society of Japan, vol. 43, no. 1, pp. 32–47, 2000.
[22] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, Germany, 1998.
[23] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[24] Q. Liu and J. Cao, "A recurrent neural network based on projection operator for extended general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 40, no. 3, pp. 928–938, 2010.


Page 3: Research Article Neural Network for Sparse Reconstructiondownloads.hindawi.com/journals/mpe/2014/107620.pdfa class of nonsmooth convex optimization problems. In [ ], a neural network

Mathematical Problems in Engineering 3

Definition 4 Let ℎ R119899 rarr R be a locally Lipschitz functionWe call ℎ R119899 times [0 +infin) rarr R a smoothing function of ℎ ifℎ satisfies the following conditions

(i) For any fixed 120583 gt 0 ℎ(sdot 120583) is continuously differ-entiable in R119899 and for any fixed 119909 isin R119899 ℎ(119909 sdot) isdifferentiable in [0 +infin)

(ii) For any fixed 119909 isin R119899 lim120583darr0

ℎ(119909 120583) = ℎ(119909)(iii) There is a positive constant 120581

ℎgt 0 such that

10038161003816100381610038161003816nabla120583ℎ (119909 120583)

10038161003816100381610038161003816le 120581ℎ forall120583 isin [0 +infin) 119909 isin R

119899

(10)

(iv) lim119911rarr119909120583darr0

nabla119911ℎ(119911 120583) sube 120597ℎ(119909)

From the above definition we can get that for any fixed119909 isin R119899

lim119911rarr119909120583darr0

ℎ (119911 120583) = ℎ (119909)

10038161003816100381610038161003816ℎ (119909 120583) minus ℎ (119909)

10038161003816100381610038161003816le 120581ℎ120583 forall120583 isin [0 +infin) 119909 isin R

119899

(11)

Next we present a smoothing function of the absolutevalue function which is defined by

120593 (119910 120583) =

10038161003816100381610038161199101003816100381610038161003816 if 1003816100381610038161003816119910

1003816100381610038161003816 ge 120583

1199102

2120583+120583

2if 1003816100381610038161003816119910

1003816100381610038161003816 lt 120583(12)

Proposition 5 (see [21]) Consider the following(i) 120593(119910 120583) is continuously differentiable about 119910 in R for

any fixed 120583 gt 0 and differentiable about 120583 for any fixed119910 isin R

(ii) 0 le nabla120583120593(119910 120583) le 1 for all 119910 isin R for all 120583 isin (0 1]

(iii) 0 le 120593(119910 120583)minus |119910| le 1205832 for all 119910 isin R for all 120583 isin (0 1](iv) 120593(119910 120583) is convex about 119910 for any fixed 120583 and

lim120583darr0

nabla119910120593(119910 120583) sube 120597|119910|

3 Theoretical Results

In (4) 119863 isin R119898times119899 can be rewritten as 119863 = (1198891 1198892 119889

119898)119879

where 119889119894(119894 = 1 2 119898) is an 119899 dimensional vector Then

(4) can be rewritten as

min 119860119909 minus 1198872

+ 120582

119898

sum

119894=1

10038161003816100381610038161003816119889119879

11989411990910038161003816100381610038161003816

st 119909 isin Ω

(13)

In the following we use sum119898

119894=1120593(119889119879

119894119909 120583) to approximate

sum119898

119894=1|119889119879

119894119909| From the idea of the projected gradient method

we construct our neural network as follows

(119905) = minus119909 (119905) + 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

(14)

where 119909(0) = 1199090 120583(119905) = 119890

minus119905 and 119875Ωis the projection operator

onΩNext we will give some analysis on the proposed neural

network (14)

Theorem 6 For any initial point 1199090isin Ω there is a global and

uniformly bounded solution of (14)

Proof The right hand of (14) is continuous about 119909 and 119905then there is a local solution of (14) with 119909

0isin Ω And we

assume that [0 119879) is themaximal existence interval of 119905 Firstwe prove that 119909(119905) isin Ω for all 119905 isin [0 119879) Obviously (14) canbe rewritten as

(119905) + 119909 (119905) = 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

(15)

From the integration about the above differential equa-tion we have

119909 (119905) = 119890minus119905

1199090+ (1 minus 119890

119905

)int

119905

0

119896 (119904)119890119904

119890119905 minus 1119889119904 (16)

where 119896(119905) = 119875Ω[119909(119905) minus 2119860

119879

(119860119909(119905) minus 119887) minus 120582sum119898

119894=1nabla119909120593(119889119879

119894119909(119905)

120583(119905))]Since int119905

0

(119890119904

(119890119905

minus 1))119889119904 = 1 1199090isin Ω and Ω is a closed

convex subset we confirm that

119909 (119905) isin Ω forall119905 isin [0 119879) (17)

Differentiating 119860119909(119905) minus 1198872

+120582sum119898

119894=1120593(119889119879

119894119909(119905) 120583(119905)) along

this solution of (14) we obtain

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))]

= ⟨2119860119879

(119860119909 (119905) minus 119887) (119905)⟩

+ 120582

119898

sum

119894=1

[⟨nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

+nabla120583120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)]

le ⟨2119860119879

(119860119909 (119905) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

(18)

Using the inequality of project operator to (14) we obtain

⟨2119860119879

(119860119909 (119905) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905)) (119905)⟩

le minus (119905)2

(19)

4 Mathematical Problems in Engineering

Thus

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] le minus (119905)

2

(20)

which follows that 119860119909(119905) minus 1198872

+ 120582sum119898

119894=1120593(119889119879

119894119909(119905) 120583(119905)) is

nonincreasing along the solution of (14) On the other handby Proposition 5 we know that

119860119909(0) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (0) 120583 (0))

ge 119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))

ge 119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1

(21)

Thus 119909(119905) is bounded on [0 119879) Using the extension the-orem the solution of (14) is globally existent and uniformlybounded

Theorem 7 For any initial point 1199090isin Ω the solution of (14)

is unique and satisfies the following

(i) (119905) is nonincreasing on [0 +infin) andlim119905rarr+infin

(119905) = 0(ii) the solution of (14) is convergent to the optimal solution

set of (4)

Proof Suppose that there exist two solutions 119909 [0infin) rarr

R119899 and 119910 [0infin) rarr R119899 of (14) with initial point 1199090= 1199100

which means that

(119905) = minus119909 (119905) + 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

119910 (119905) = minus119910 (119905) + 119875Ω[119910 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]

(22)

Thus

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910 (119905)10038171003817100381710038172

= ⟨119909 (119905) minus 119910 (119905) (119905) minus 119910 (119905)⟩

= minus1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

+ ⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

(23)

where 1205851(119905) = 119909(119905)minus2119860

119879

(119860119909(119905)minus119887)minus120582sum119898

119894=1nabla119909120593(119889119879

119894119909(119905) 120583(119905))

and 1205852(119905) = 119910(119905) minus 2119860

119879

(119860119909(119905) minus 119887) minus 120582sum119898

119894=1nabla119910120593(119889119879

119894119910(119905) 120583(119905))

From the expression of119875Ωand 119909(119905) 119910(119905) isin Ω for all 119905 ge 0

we get

⟨1205851(119905) minus 119875

Ω[1205851(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

⟨1205852(119905) minus 119875

Ω[1205852(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

(24)

Thus we have

⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

= ⟨119909 (119905) minus 119910 (119905) 1205851(119905) minus 120585

2(119905)⟩

=1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

minus 21003817100381710038171003817119860 (119909 (119905) minus 119910 (119905))

10038171003817100381710038172

minus 120582

119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))⟩

(25)

Since120593(119889119879119894119910 120583) (119894 = 1 2 119898) is convex about119910 for any

fixed 120583 we have

⟨119909 minus 119910 nabla119909120593 (119889119879

119894119909 120583) minus nabla

119910120593 (119889119879

119894119910 120583)⟩ ge 0 forall119909 119910 isin R

119899

(26)

Thus for all 119905 ge 0119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905)) ⟩ ge 0

(27)

Using (25) and (27) into (23) we have

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910(119910)10038171003817100381710038172

= minus21003817100381710038171003817119860(119909(119905) minus 119910(119905))

10038171003817100381710038172

minus⟨119909 (119905) minus 119910 (119905) 120582

119898

sum

119894=1

[nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]⟩ le 0

forall119905 isin [0 +infin)

(28)

By integrating (28) from 0 to 119905 we derive that

sup119905ge0

1003817100381710038171003817119909 (119905) minus 119910 (119905)1003817100381710038171003817 le

10038171003817100381710038171199090 minus 1199100

1003817100381710038171003817 (29)

Therefore 119909(119905) = 119910(119905) for all 119905 ge 0 when 1199090= 1199100 which

derives the uniqueness of the solution of (14)Let 119910(119905) = 119909(119905 + ℎ) where ℎ gt 0 (29) implies that

119909 (119905 + ℎ) minus 119909 (119905) le 119909 (ℎ) minus 119909 (0) forall119905 ge 0 (30)

Therefore 119905 rarr (119905) is nonincreasing

Mathematical Problems in Engineering 5

From (20) we obtain that 119860119909(119905) minus 1198872

+120582sum119898

119894=1120593(119889119879

119894119909(119905)

120583(119905)) is nonincreasing and bounded form below on [0 +infin)therefore we have that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] exists (31)

Using Proposition 5 to the above result we obtain that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1] exists (32)

Moreover we have that

lim119905rarr+infin

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] = 0 (33)

Combining (20) and (33) we confirm that

lim119905rarr+infin

(119905) = 0 (34)

Since 119909(119905) is uniformly bounded on the global intervalthere is a cluster point of it denoted as 119909lowast which follows thatthere exists an increasing sequence 119905

119899such that

lim119899rarr+infin

119905119899= +infin lim

119899rarr+infin

119909 (119905119899) = 119909lowast

(35)

Using the expression of (14) and lim119905rarr+infin

(119905) = 0 wehave

119909lowast

= 119875Ω[119909lowast

minus lim119899rarr+infin

(2119860119879

(119860119909 (119905119899) minus 119887)

+120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905119899) 120583 (119905

119899)))]

(36)

Using Proposition 3 in the above equation we have

⟨ lim119899rarr+infin

(2119860119879

(119860119909 (119905119899) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905119899) 120583 (119905

119899)))

V minus 119909lowast

⟩ ge 0 forallV isin Ω

(37)

From Proposition 5 there exists 120585 isin 120597[119860119909(119905) minus 1198872

+

120582119863119909(119905)1] such that

⟨120585 V minus 119909lowast

⟩ ge 0 forallV isin Ω (38)

Therefore 119909lowast is a Clarke stationary point of (4) Since (4)is a convex programming 119909lowast is an optimal solution of (4)Owning to the random cluster point of 119909(119905) we know that

any cluster point of 119909(119905) is an optimal solution of (4) whichmeans that the solution of (14) converges to the optimalsolution set of (4)

4 Numerical Experiments

In this section we will give two numerical experiments tovalidate the theoretical results obtained in this paper and thegood performance of the proposed neural network in solvingthe sparse reconstruction problems

Example 1 In this experiment we will test an experiment forthe signal recovered with noise Every original signal withsparsity 1 means that there is only one sound at time pointWe use the followingMATLAB codes to generate a 100 lengthoriginal signal 119909 isin R5times100 mixingmatrix119860 isin R5times4 observedsignal 119887 isin R4times100 and noise 119899 isin R4times100

s=zeros(5100)

for l=1100

q=randperm(5)s(q(12)l) = (2lowastrandn(21))

end

A=randn(35)

n = 005 lowast randn(3100)

b=Alowasts-n

We denote 119904lowast as the recovered signal using our methodFigures 1(a)-2(a) show the original observed and recoveredsignals using (14) Figure 2(b) presents the convergence ofsignal-to-noise ratio (SNR) along the solution of the pro-posed neural network From this figure we see that ourmethod recovers this random original effectively And weshould state that the SNR of the recovered signal is 2215 dBwhere

SNR =

119871

sum

119894=1

minus1

11987120lg(

1003817100381710038171003817119904lowast

(119897) minus 119904 (119897)10038171003817100381710038172

119904 (119897)2

) (39)

Example 2 In this experiment we perform the proposednetwork (14) on the restoration of 20 times 20 circle image Theobserved image is distorted from the unknown true imagemainly by two factors the blurring and the randomnoiseTheblurring is a 2119863 Gaussian function

ℎ (119894 119895) = 119890minus2(1198943)

2minus2(1198953)

2

(40)

which is truncated such that the function has a supportof 7 times 7 A Gaussian noise with zero mean and standardderivation of 005 dB is added to the blurred image Figures3(a) and 3(b) present the original and the observed imagesrespectively The peak signal-to-noise ratio (PSNR) of theobserved image is 1687 dB Denote 119909

119900and 119909

119887as the original

6 Mathematical Problems in Engineering

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0

20 40 60 80 1000

5

minus5

(a)

0

5

10

0

5

0 20 40 60 80 100

0 20 40 60 80 100

0 20 40 60 80 100minus5

minus5

0

5

minus5

(b)

Figure 1 (a) Original signals (b) observed signals

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0

20 40 60 80 1000

5

minus5

(a)

0 20 40 60 80 1005

10

15

20

25

30

35

40

45

(b)

Figure 2 (a) Recovered signals (b) the convergence of SNR(119909(119905))

and corresponding observed images and use the PSNR toevaluate the quality of the restored image that is

PSNR (119909) = minus10log10

1003817100381710038171003817119909 minus 119909119900

1003817100381710038171003817

20 times 20 (41)

We use problem (13) to recover this image where we let120582 = 0017 Ω = 119909 0 le 119909 le 119890 and

119863 = (1198711otimes 119868

119868 otimes 1198711

) with 1198711= (

1 minus1

1 minus1

d d1 minus1

) (42)

Choose 1199090= 119875Ω(119887) The recovered image by (14) with

1199090is figured in Figure 3(c) with PSNR = 1965 dB The

convergence of the objective value and PSNR(119909(119905)) along thesolution of (14) with initial point 119909

0is presented in Figures

4(a) and 4(b) From Figures 4(a) and 4(b) we find that theobjective value is monotonely decreasing and the PSNR ismonotonely increasing along the solution of (14)

5 Conclusion

Basing on the smoothing approximation technique and pro-jected gradient method we construct a neural network mod-eled by a differential equation to solve a class of constrainednonsmooth convex optimization problems which have wideapplications in sparse reconstruction The proposed networkhas a unique and bounded solution with any initial point in

Mathematical Problems in Engineering 7

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(a)

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(b)

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(c)

Figure 3 (a) Original image (b) observed image (c) recovered image

0 10 20 30 40 50 60 70 80

192

21

11

12

13

14

15

16

17

18

f(x(t))

t

(a)

0 10 20 30 40 50 60 70 8017

175

18

185

19

195

20

205

PSNT(x(t))

t

(b)

Figure 4 (a) Convergence of the objective value (b) convergence of PSNR(119909(119905))

8 Mathematical Problems in Engineering

the feasible region Moreover the solution of proposed net-work converges to the solution set of the optimization prob-lem Simulation results on numerical examples are elaboratedupon to substantiate the effectiveness and performance of theneural network

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Theauthors would like to thank the Editor-in-Chief ProfessorHuaiqin Wu and the three anonymous reviewers for theirinsightful and constructive comments which help to enrichthe content and improve the presentation of the results in thispaper This work is supported by the Fundamental ResearchFunds for the Central Universities (DL12EB04) and NationalNatural Science Foundation of China (31370565)

References

[1] Y C Eldar and M Mishali ldquoRobust recovery of signalsfrom a structured union of subspacesrdquo IEEE Transactions onInformation Theory vol 55 no 11 pp 5302ndash5316 2009

[2] R Saab O Yilmaz M J McKeown and R AbugharbiehldquoUnderdetermined anechoic blind source separation via l

119902-

basis-pursuit With qlt 1rdquo IEEE Transactions on Signal Process-ing vol 55 no 8 pp 4004ndash4017 2007

[3] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[4] L B Montefusco D Lazzaro and S Papi ldquoNonlinear filteringfor sparse signal recovery from incomplete measurementsrdquoIEEE Transactions on Signal Processing vol 57 no 7 pp 2494ndash2502 2009

[5] Y Xiang S K Ng andV KNguyen ldquoBlind separation ofmutu-ally correlated sources using precodersrdquo IEEE Transactions onNeural Networks vol 21 no 1 pp 82ndash90 2010

[6] M K Ng R H Chan and W-C Tang ldquoFast algorithm fordeblurringmodels with Neumann boundary conditionsrdquo SIAMJournal on Scientific Computing vol 21 no 3 pp 851ndash866 1999

[7] D L Donoho ldquoNeighborly polytopes and sparse solutions ofunderdetermined linear equationsrdquo Tech Rep Department ofStatistics Stanford University Standford Calif USA 2005

[8] R Tibshirani ldquoRegression shrinkage and selection via the lassordquoJournal of the Royal Statistical Society B vol 58 pp 267ndash2881996

[9] A Cichocki and R Unbehauen Neural Networks for Optimiza-tion and Signal Processing Wiley London UK 1993

[10] W Bian and X Xue ldquoSubgradient-based neural networks fornonsmooth nonconvex optimization problemsrdquo IEEE Transac-tions on Neural Networks vol 20 no 6 pp 1024ndash1038 2009

[11] Y Xia and J Wang ldquoA recurrent neural network for solvingnonlinear convex programs subject to linear constraintsrdquo IEEETransactions on Neural Networks vol 16 no 2 pp 379ndash3862005

[12] X B Gao and L Z Liao ldquoA new one-layer recurrent neuralnetwork for nonsmooth convex optimization subject to linear

equality constraintsrdquo IEEE Transactions on Neural Networksvol 21 pp 918ndash929 2010

[13] J J Hopfield and DW Tank ldquoNeural computation of decisionsin optimization problemsrdquo Biological Cybernetics vol 52 no 3pp 141ndash152 1985

[14] D W Tank and J J Hopfield ldquoSimple neural optimizationnetwork an AA converter signal decision circuit and alinear programming circuitrdquo IEEE Transactions on Circuits andSystems vol 33 no 5 pp 533ndash541 1986

[15] M P Kennedy and L O Chua ldquoNeural networks for nonlinearprogrammingrdquo IEEE Transactions on Circuits and Systems vol35 no 5 pp 554ndash562 1988

[16] E K P Chong S Hui and S H Zak ldquoAn analysis of a classof neural networks for solving linear programming problemsrdquoIEEE Transactions on Automatic Control vol 44 no 11 pp1995ndash2006 1999

[17] M Forti P Nistri and M Quincampoix ldquoGeneralized neuralnetwork for nonsmooth nonlinear programming problemsrdquoIEEE Transactions on Circuits and Systems I Regular Papers vol51 no 9 pp 1741ndash1754 2004

[18] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE TransactionsonCircuits and Systems I Regular Papers vol 55 no 8 pp 2378ndash2391 2008

[19] XChenMKNg andC Zhang ldquoNonconvex lp-regularizationand box constrained model for image restorationrdquo IEEE Trans-actins on Image Processing vol 21 pp 4709ndash4721 2010

[20] W Bian and X Chen ldquoNeural network for nonsmooth non-convex constrained minimization via smooth approximationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 25 pp 545ndash556 2014

[21] X Chen ldquoSmoothing methods for complementarity problemsand their applications a surveyrdquo Journal of the OperationsResearch Society of Japan vol 43 no 1 pp 32ndash47 2000

[22] R T Rockafellar and RWets J-BVariational Analysis SpringerBerlin Germany 1998

[23] X Xue and W Bian ldquoA project neural network for solvingdegenerate convex quadratic programrdquo Neurocomputing vol70 no 13ndash15 pp 2449ndash2459 2007

[24] Q Liu and J Cao ldquoA recurrent neural network based onprojection operator for extended general variational inequal-itiesrdquo IEEE Transactions on Systems Man and Cybernetics BCybernetics vol 40 no 3 pp 928ndash938 2010

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 4: Research Article Neural Network for Sparse Reconstructiondownloads.hindawi.com/journals/mpe/2014/107620.pdfa class of nonsmooth convex optimization problems. In [ ], a neural network

4 Mathematical Problems in Engineering

Thus

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] le minus (119905)

2

(20)

which follows that 119860119909(119905) minus 1198872

+ 120582sum119898

119894=1120593(119889119879

119894119909(119905) 120583(119905)) is

nonincreasing along the solution of (14) On the other handby Proposition 5 we know that

119860119909(0) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (0) 120583 (0))

ge 119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))

ge 119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1

(21)

Thus 119909(119905) is bounded on [0 119879) Using the extension the-orem the solution of (14) is globally existent and uniformlybounded

Theorem 7 For any initial point 1199090isin Ω the solution of (14)

is unique and satisfies the following

(i) (119905) is nonincreasing on [0 +infin) andlim119905rarr+infin

(119905) = 0(ii) the solution of (14) is convergent to the optimal solution

set of (4)

Proof Suppose that there exist two solutions 119909 [0infin) rarr

R119899 and 119910 [0infin) rarr R119899 of (14) with initial point 1199090= 1199100

which means that

(119905) = minus119909 (119905) + 119875Ω[119909 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))]

119910 (119905) = minus119910 (119905) + 119875Ω[119910 (119905) minus 2119860

119879

(119860119909 (119905) minus 119887)

minus120582

119898

sum

119894=1

nabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]

(22)

Thus

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910 (119905)10038171003817100381710038172

= ⟨119909 (119905) minus 119910 (119905) (119905) minus 119910 (119905)⟩

= minus1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

+ ⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

(23)

where 1205851(119905) = 119909(119905)minus2119860

119879

(119860119909(119905)minus119887)minus120582sum119898

119894=1nabla119909120593(119889119879

119894119909(119905) 120583(119905))

and 1205852(119905) = 119910(119905) minus 2119860

119879

(119860119909(119905) minus 119887) minus 120582sum119898

119894=1nabla119910120593(119889119879

119894119910(119905) 120583(119905))

From the expression of119875Ωand 119909(119905) 119910(119905) isin Ω for all 119905 ge 0

we get

⟨1205851(119905) minus 119875

Ω[1205851(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

⟨1205852(119905) minus 119875

Ω[1205852(119905)] 119909 (119905) minus 119910 (119905)⟩ = 0 forall119905 ge 0

(24)

Thus we have

⟨119909 (119905) minus 119910 (119905) 119875Ω[1205851(119905)] minus 119875

Ω[1205852(119905)]⟩

= ⟨119909 (119905) minus 119910 (119905) 1205851(119905) minus 120585

2(119905)⟩

=1003817100381710038171003817119909 (119905) minus 119910 (119905)

10038171003817100381710038172

minus 21003817100381710038171003817119860 (119909 (119905) minus 119910 (119905))

10038171003817100381710038172

minus 120582

119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))⟩

(25)

Since120593(119889119879119894119910 120583) (119894 = 1 2 119898) is convex about119910 for any

fixed 120583 we have

⟨119909 minus 119910 nabla119909120593 (119889119879

119894119909 120583) minus nabla

119910120593 (119889119879

119894119910 120583)⟩ ge 0 forall119909 119910 isin R

119899

(26)

Thus for all 119905 ge 0119898

sum

119894=1

⟨119909 (119905) minus 119910 (119905) nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905)) ⟩ ge 0

(27)

Using (25) and (27) into (23) we have

119889

119889119905

1

2

1003817100381710038171003817119909(119905) minus 119910(119910)10038171003817100381710038172

= minus21003817100381710038171003817119860(119909(119905) minus 119910(119905))

10038171003817100381710038172

minus⟨119909 (119905) minus 119910 (119905) 120582

119898

sum

119894=1

[nabla119909120593 (119889119879

119894119909 (119905) 120583 (119905))

minusnabla119910120593 (119889119879

119894119910 (119905) 120583 (119905))]⟩ le 0

forall119905 isin [0 +infin)

(28)

By integrating (28) from 0 to 119905 we derive that

sup119905ge0

1003817100381710038171003817119909 (119905) minus 119910 (119905)1003817100381710038171003817 le

10038171003817100381710038171199090 minus 1199100

1003817100381710038171003817 (29)

Therefore 119909(119905) = 119910(119905) for all 119905 ge 0 when 1199090= 1199100 which

derives the uniqueness of the solution of (14)Let 119910(119905) = 119909(119905 + ℎ) where ℎ gt 0 (29) implies that

119909 (119905 + ℎ) minus 119909 (119905) le 119909 (ℎ) minus 119909 (0) forall119905 ge 0 (30)

Therefore 119905 rarr (119905) is nonincreasing

Mathematical Problems in Engineering 5

From (20) we obtain that 119860119909(119905) minus 1198872

+120582sum119898

119894=1120593(119889119879

119894119909(119905)

120583(119905)) is nonincreasing and bounded form below on [0 +infin)therefore we have that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] exists (31)

Using Proposition 5 to the above result we obtain that

lim119905rarr+infin

[119860119909 (119905) minus 1198872

+ 120582119863119909 (119905)1] exists (32)

Moreover we have that

lim119905rarr+infin

119889

119889119905[119860119909 (119905) minus 119887

2

+ 120582

119898

sum

119894=1

120593 (119889119879

119894119909 (119905) 120583 (119905))] = 0 (33)

Combining (20) and (33) we confirm that

lim119905rarr+infin

(119905) = 0 (34)

Since 119909(119905) is uniformly bounded on the global intervalthere is a cluster point of it denoted as 119909lowast which follows thatthere exists an increasing sequence 119905

119899such that

lim119899rarr+infin

119905119899= +infin lim

119899rarr+infin

119909 (119905119899) = 119909lowast

(35)

Using the expression of (14) and lim119905rarr+infin

(119905) = 0 wehave

119909lowast

= 119875Ω[119909lowast

minus lim119899rarr+infin

(2119860119879

(119860119909 (119905119899) minus 119887)

+120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905119899) 120583 (119905

119899)))]

(36)

Using Proposition 3 in the above equation we have

⟨ lim119899rarr+infin

(2119860119879

(119860119909 (119905119899) minus 119887) + 120582

119898

sum

119894=1

nabla119909120593 (119889119879

119894119909 (119905119899) 120583 (119905

119899)))

V minus 119909lowast

⟩ ge 0 forallV isin Ω

(37)

From Proposition 5 there exists 120585 isin 120597[119860119909(119905) minus 1198872

+

120582119863119909(119905)1] such that

⟨120585 V minus 119909lowast

⟩ ge 0 forallV isin Ω (38)

Therefore 119909lowast is a Clarke stationary point of (4) Since (4)is a convex programming 119909lowast is an optimal solution of (4)Owning to the random cluster point of 119909(119905) we know that

any cluster point of 119909(119905) is an optimal solution of (4) whichmeans that the solution of (14) converges to the optimalsolution set of (4)

4 Numerical Experiments

In this section we will give two numerical experiments tovalidate the theoretical results obtained in this paper and thegood performance of the proposed neural network in solvingthe sparse reconstruction problems

Example 1 In this experiment we will test an experiment forthe signal recovered with noise Every original signal withsparsity 1 means that there is only one sound at time pointWe use the followingMATLAB codes to generate a 100 lengthoriginal signal 119909 isin R5times100 mixingmatrix119860 isin R5times4 observedsignal 119887 isin R4times100 and noise 119899 isin R4times100

s=zeros(5100)

for l=1100

q=randperm(5)s(q(12)l) = (2lowastrandn(21))

end

A=randn(35)

n = 005 lowast randn(3100)

b=Alowasts-n

We denote 119904lowast as the recovered signal using our methodFigures 1(a)-2(a) show the original observed and recoveredsignals using (14) Figure 2(b) presents the convergence ofsignal-to-noise ratio (SNR) along the solution of the pro-posed neural network From this figure we see that ourmethod recovers this random original effectively And weshould state that the SNR of the recovered signal is 2215 dBwhere

SNR =

119871

sum

119894=1

minus1

11987120lg(

1003817100381710038171003817119904lowast

(119897) minus 119904 (119897)10038171003817100381710038172

119904 (119897)2

) (39)

Example 2 In this experiment we perform the proposednetwork (14) on the restoration of 20 times 20 circle image Theobserved image is distorted from the unknown true imagemainly by two factors the blurring and the randomnoiseTheblurring is a 2119863 Gaussian function

ℎ (119894 119895) = 119890minus2(1198943)

2minus2(1198953)

2

(40)

which is truncated such that the function has a supportof 7 times 7 A Gaussian noise with zero mean and standardderivation of 005 dB is added to the blurred image Figures3(a) and 3(b) present the original and the observed imagesrespectively The peak signal-to-noise ratio (PSNR) of theobserved image is 1687 dB Denote 119909

119900and 119909

119887as the original

6 Mathematical Problems in Engineering

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0

20 40 60 80 1000

5

minus5

(a)

0

5

10

0

5

0 20 40 60 80 100

0 20 40 60 80 100

0 20 40 60 80 100minus5

minus5

0

5

minus5

(b)

Figure 1 (a) Original signals (b) observed signals

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0 20 40 60 80 100

0

10

minus10

0

20 40 60 80 1000

5

minus5

(a)

0 20 40 60 80 1005

10

15

20

25

30

35

40

45

(b)

Figure 2 (a) Recovered signals (b) the convergence of SNR(119909(119905))

and corresponding observed images and use the PSNR toevaluate the quality of the restored image that is

PSNR (119909) = minus10log10

1003817100381710038171003817119909 minus 119909119900

1003817100381710038171003817

20 times 20 (41)

We use problem (13) to recover this image where we let120582 = 0017 Ω = 119909 0 le 119909 le 119890 and

119863 = (1198711otimes 119868

119868 otimes 1198711

) with 1198711= (

1 minus1

1 minus1

d d1 minus1

) (42)

Choose 1199090= 119875Ω(119887) The recovered image by (14) with

1199090is figured in Figure 3(c) with PSNR = 1965 dB The

convergence of the objective value and PSNR(119909(119905)) along thesolution of (14) with initial point 119909

0is presented in Figures

4(a) and 4(b) From Figures 4(a) and 4(b) we find that theobjective value is monotonely decreasing and the PSNR ismonotonely increasing along the solution of (14)

5 Conclusion

Basing on the smoothing approximation technique and pro-jected gradient method we construct a neural network mod-eled by a differential equation to solve a class of constrainednonsmooth convex optimization problems which have wideapplications in sparse reconstruction The proposed networkhas a unique and bounded solution with any initial point in

Mathematical Problems in Engineering 7

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(a)

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(b)

2 4 6 8 10 12 14 16 18 20

2

4

6

8

10

12

14

16

18

20

(c)

Figure 3 (a) Original image (b) observed image (c) recovered image

0 10 20 30 40 50 60 70 80

192

21

11

12

13

14

15

16

17

18

f(x(t))

t

(a)

0 10 20 30 40 50 60 70 8017

175

18

185

19

195

20

205

PSNT(x(t))

t

(b)

Figure 4 (a) Convergence of the objective value (b) convergence of PSNR(119909(119905))

8 Mathematical Problems in Engineering

the feasible region Moreover the solution of proposed net-work converges to the solution set of the optimization prob-lem Simulation results on numerical examples are elaboratedupon to substantiate the effectiveness and performance of theneural network

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Theauthors would like to thank the Editor-in-Chief ProfessorHuaiqin Wu and the three anonymous reviewers for theirinsightful and constructive comments which help to enrichthe content and improve the presentation of the results in thispaper This work is supported by the Fundamental ResearchFunds for the Central Universities (DL12EB04) and NationalNatural Science Foundation of China (31370565)

References

[1] Y C Eldar and M Mishali ldquoRobust recovery of signalsfrom a structured union of subspacesrdquo IEEE Transactions onInformation Theory vol 55 no 11 pp 5302ndash5316 2009

[2] R Saab O Yilmaz M J McKeown and R AbugharbiehldquoUnderdetermined anechoic blind source separation via l

119902-

basis-pursuit With qlt 1rdquo IEEE Transactions on Signal Process-ing vol 55 no 8 pp 4004ndash4017 2007

[3] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[4] L B Montefusco D Lazzaro and S Papi ldquoNonlinear filteringfor sparse signal recovery from incomplete measurementsrdquoIEEE Transactions on Signal Processing vol 57 no 7 pp 2494ndash2502 2009

[5] Y Xiang S K Ng andV KNguyen ldquoBlind separation ofmutu-ally correlated sources using precodersrdquo IEEE Transactions onNeural Networks vol 21 no 1 pp 82ndash90 2010

[6] M K Ng R H Chan and W-C Tang ldquoFast algorithm fordeblurringmodels with Neumann boundary conditionsrdquo SIAMJournal on Scientific Computing vol 21 no 3 pp 851ndash866 1999

[7] D L Donoho ldquoNeighborly polytopes and sparse solutions ofunderdetermined linear equationsrdquo Tech Rep Department ofStatistics Stanford University Standford Calif USA 2005

[8] R Tibshirani ldquoRegression shrinkage and selection via the lassordquoJournal of the Royal Statistical Society B vol 58 pp 267ndash2881996

[9] A Cichocki and R Unbehauen Neural Networks for Optimiza-tion and Signal Processing Wiley London UK 1993

[10] W Bian and X Xue ldquoSubgradient-based neural networks fornonsmooth nonconvex optimization problemsrdquo IEEE Transac-tions on Neural Networks vol 20 no 6 pp 1024ndash1038 2009

[11] Y Xia and J Wang ldquoA recurrent neural network for solvingnonlinear convex programs subject to linear constraintsrdquo IEEETransactions on Neural Networks vol 16 no 2 pp 379ndash3862005

[12] X B Gao and L Z Liao ldquoA new one-layer recurrent neuralnetwork for nonsmooth convex optimization subject to linear

equality constraintsrdquo IEEE Transactions on Neural Networksvol 21 pp 918ndash929 2010

[13] J J Hopfield and DW Tank ldquoNeural computation of decisionsin optimization problemsrdquo Biological Cybernetics vol 52 no 3pp 141ndash152 1985

[14] D W Tank and J J Hopfield ldquoSimple neural optimizationnetwork an AA converter signal decision circuit and alinear programming circuitrdquo IEEE Transactions on Circuits andSystems vol 33 no 5 pp 533ndash541 1986

[15] M P Kennedy and L O Chua ldquoNeural networks for nonlinearprogrammingrdquo IEEE Transactions on Circuits and Systems vol35 no 5 pp 554ndash562 1988

[16] E K P Chong S Hui and S H Zak ldquoAn analysis of a classof neural networks for solving linear programming problemsrdquoIEEE Transactions on Automatic Control vol 44 no 11 pp1995ndash2006 1999

[17] M Forti P Nistri and M Quincampoix ldquoGeneralized neuralnetwork for nonsmooth nonlinear programming problemsrdquoIEEE Transactions on Circuits and Systems I Regular Papers vol51 no 9 pp 1741ndash1754 2004

[18] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE TransactionsonCircuits and Systems I Regular Papers vol 55 no 8 pp 2378ndash2391 2008

[19] XChenMKNg andC Zhang ldquoNonconvex lp-regularizationand box constrained model for image restorationrdquo IEEE Trans-actins on Image Processing vol 21 pp 4709ndash4721 2010

[20] W Bian and X Chen ldquoNeural network for nonsmooth non-convex constrained minimization via smooth approximationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 25 pp 545ndash556 2014

[21] X Chen ldquoSmoothing methods for complementarity problemsand their applications a surveyrdquo Journal of the OperationsResearch Society of Japan vol 43 no 1 pp 32ndash47 2000

[22] R T Rockafellar and RWets J-BVariational Analysis SpringerBerlin Germany 1998

[23] X Xue and W Bian ldquoA project neural network for solvingdegenerate convex quadratic programrdquo Neurocomputing vol70 no 13ndash15 pp 2449ndash2459 2007

[24] Q Liu and J Cao ldquoA recurrent neural network based onprojection operator for extended general variational inequal-itiesrdquo IEEE Transactions on Systems Man and Cybernetics BCybernetics vol 40 no 3 pp 928ndash938 2010


From (20) we obtain that $\|Ax(t)-b\|^{2}+\lambda\sum_{i=1}^{m}\varphi\big(d_i^{T}x(t),\mu(t)\big)$ is nonincreasing and bounded from below on $[0,+\infty)$; therefore,
\[
\lim_{t\to+\infty}\Big[\|Ax(t)-b\|^{2}+\lambda\sum_{i=1}^{m}\varphi\big(d_i^{T}x(t),\mu(t)\big)\Big]\ \text{exists}. \tag{31}
\]

Applying Proposition 5 to the above result, we obtain that
\[
\lim_{t\to+\infty}\Big[\|Ax(t)-b\|^{2}+\lambda\|Dx(t)\|_{1}\Big]\ \text{exists}. \tag{32}
\]

Moreover, we have that
\[
\lim_{t\to+\infty}\frac{d}{dt}\Big[\|Ax(t)-b\|^{2}+\lambda\sum_{i=1}^{m}\varphi\big(d_i^{T}x(t),\mu(t)\big)\Big]=0. \tag{33}
\]

Combining (20) and (33), we confirm that
\[
\lim_{t\to+\infty}\dot{x}(t)=0. \tag{34}
\]

Since $x(t)$ is uniformly bounded on the global interval, it has a cluster point, denoted $x^{*}$; hence there exists an increasing sequence $t_{n}$ such that
\[
\lim_{n\to+\infty}t_{n}=+\infty,\qquad \lim_{n\to+\infty}x(t_{n})=x^{*}. \tag{35}
\]

Using the expression of (14) and $\lim_{t\to+\infty}\dot{x}(t)=0$, we have
\[
x^{*}=P_{\Omega}\Big[x^{*}-\lim_{n\to+\infty}\Big(2A^{T}\big(Ax(t_{n})-b\big)+\lambda\sum_{i=1}^{m}\nabla_{x}\varphi\big(d_i^{T}x(t_{n}),\mu(t_{n})\big)\Big)\Big]. \tag{36}
\]

Using Proposition 3 in the above equation, we have
\[
\Big\langle \lim_{n\to+\infty}\Big(2A^{T}\big(Ax(t_{n})-b\big)+\lambda\sum_{i=1}^{m}\nabla_{x}\varphi\big(d_i^{T}x(t_{n}),\mu(t_{n})\big)\Big),\ v-x^{*}\Big\rangle \ge 0,\quad \forall v\in\Omega. \tag{37}
\]

From Proposition 5, there exists $\xi\in\partial\big[\|Ax^{*}-b\|^{2}+\lambda\|Dx^{*}\|_{1}\big]$ such that
\[
\langle \xi,\ v-x^{*}\rangle \ge 0,\quad \forall v\in\Omega. \tag{38}
\]

Therefore, $x^{*}$ is a Clarke stationary point of (4). Since (4) is a convex program, $x^{*}$ is an optimal solution of (4). Because the cluster point of $x(t)$ was chosen arbitrarily, any cluster point of $x(t)$ is an optimal solution of (4), which means that the solution of (14) converges to the optimal solution set of (4).

4. Numerical Experiments

In this section, we give two numerical experiments to validate the theoretical results obtained in this paper and to illustrate the good performance of the proposed neural network in solving sparse reconstruction problems.

Example 1. In this experiment we test the recovery of a noisy mixed signal. Each column of the original signal is sparse, that is, only a few sources are active at each time point. We use the following MATLAB code to generate a length-100 original signal $s\in\mathbb{R}^{5\times 100}$, mixing matrix $A\in\mathbb{R}^{3\times 5}$, observed signal $b\in\mathbb{R}^{3\times 100}$, and noise $n\in\mathbb{R}^{3\times 100}$:

s = zeros(5,100);                 % original source signals: 5 sources x 100 time points
for l = 1:100
    q = randperm(5);
    s(q(1:2),l) = 2*randn(2,1);   % two randomly chosen sources are active at time l
end
A = randn(3,5);                   % mixing matrix
n = 0.05*randn(3,100);            % noise
b = A*s - n;                      % observed mixtures
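
For context, the sketch below shows one plausible way to run a projected-gradient network of the form suggested by (36) on these data, integrated with a simple explicit Euler scheme. It is only an illustration, not the paper's exact model (14): the smoothing function phi(s,mu) = sqrt(s^2 + mu^2), the choices D = I and Omega = R^5 for this example (so the projection is the identity), the smoothing schedule mu, and the parameters lambda, dt, and the iteration count are all assumptions.

% Hedged sketch: Euler integration of xdot = -x + P_Omega[x - (2*A'*(A*x-b) + lambda*grad_phi)],
% applied column by column; phi(s,mu) = sqrt(s^2+mu^2) is an assumed smoothing of |s|.
lambda = 0.1; dt = 1e-3; T = 5000;        % assumed parameters
proj  = @(z) z;                           % P_Omega for Omega = R^5 (assumed)
s_rec = zeros(size(s));
for l = 1:100
    x = zeros(5,1);                       % assumed initial point
    for k = 1:T
        mu   = 1/k;                       % assumed decreasing smoothing parameter
        grad = 2*A'*(A*x - b(:,l)) + lambda*(x./sqrt(x.^2 + mu^2));
        x    = x + dt*(-x + proj(x - grad));
    end
    s_rec(:,l) = x;                       % plays the role of s* below
end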

We denote $s^{*}$ as the signal recovered by our method. Figures 1(a), 1(b), and 2(a) show the original, observed, and recovered signals obtained by (14), and Figure 2(b) presents the convergence of the signal-to-noise ratio (SNR) along the solution of the proposed neural network. From these figures we see that our method recovers the random original signal effectively; the SNR of the recovered signal is 22.15 dB, where

\[
\mathrm{SNR}=-\frac{1}{L}\sum_{l=1}^{L}20\lg\frac{\|s^{*}(l)-s(l)\|_{2}}{\|s(l)\|_{2}}. \tag{39}
\]
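
For completeness, (39) as reconstructed above can be evaluated directly in MATLAB; here s_rec stands for the recovered signal $s^{*}$ and s for the original signal generated above:

% SNR of the recovered signal according to (39).
L = size(s,2);
snr_val = 0;
for l = 1:L
    snr_val = snr_val - (20/L)*log10(norm(s_rec(:,l) - s(:,l))/norm(s(:,l)));
end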

Example 2. In this experiment we apply the proposed network (14) to the restoration of a $20\times 20$ circle image. The observed image is distorted from the unknown true image mainly by two factors: blurring and random noise. The blur is given by the 2D Gaussian function
\[
h(i,j)=e^{-2(i/3)^{2}-2(j/3)^{2}}, \tag{40}
\]
which is truncated such that the function has a support of $7\times 7$. Gaussian noise with zero mean and standard deviation 0.05 is added to the blurred image. Figures 3(a) and 3(b) present the original and the observed images, respectively. The peak signal-to-noise ratio (PSNR) of the observed image is 16.87 dB.
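
As a small illustration, the truncated kernel in (40) and the simulated degradation can be generated as follows. The normalization of the kernel, the zero-padded convolution, and the placeholder circle_image are assumptions made for illustration; the paper does not specify these details.

% Build the 7x7 truncated Gaussian PSF of (40) and simulate the observed image.
[jj, ii] = meshgrid(-3:3, -3:3);
h = exp(-2*(ii/3).^2 - 2*(jj/3).^2);
h = h/sum(h(:));                             % normalization (assumed)
x_true = circle_image;                       % hypothetical 20x20 true image with values in [0,1]
x_blur = conv2(x_true, h, 'same');           % zero boundary conditions (assumed)
x_obs  = x_blur + 0.05*randn(size(x_blur));  % zero-mean Gaussian noise, std 0.05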

Figure 1: (a) original signals; (b) observed signals.

Figure 2: (a) recovered signals; (b) convergence of SNR(x(t)).

Denote $x_{o}$ and $x_{b}$ as the original and the corresponding observed images, and use the PSNR to evaluate the quality of the restored image; that is,
\[
\mathrm{PSNR}(x)=-10\log_{10}\frac{\|x-x_{o}\|^{2}}{20\times 20}. \tag{41}
\]
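
Equation (41) translates directly into MATLAB; here x and x_o are the restored and original images (vectorized consistently):

% PSNR of a restored 20x20 image x with respect to the original x_o, as in (41).
psnr_val = -10*log10(norm(x(:) - x_o(:))^2/(20*20));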

We use problem (13) to recover this image, where we let $\lambda=0.017$, $\Omega=\{x: 0\le x\le e\}$, and
\[
D=\begin{pmatrix} L_{1}\otimes I \\ I\otimes L_{1} \end{pmatrix}
\quad\text{with}\quad
L_{1}=\begin{pmatrix} 1 & -1 & & & \\ & 1 & -1 & & \\ & & \ddots & \ddots & \\ & & & 1 & -1 \end{pmatrix}. \tag{42}
\]
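
A possible construction of D in (42) for a 20 x 20 image, using Kronecker products, together with the projected initial point used next, is sketched below; taking L1 to be the square 20 x 20 first-order difference matrix is an assumption, since (42) only shows its bidiagonal pattern.

% Difference operator D of (42) and the initial point x0 = P_Omega(x_obs), where
% Omega = {x : 0 <= x <= e}, i.e., componentwise clipping of the observed image to [0,1].
n  = 20;
L1 = eye(n) - diag(ones(n-1,1), 1);          % 1 on the diagonal, -1 on the superdiagonal
D  = [kron(L1, eye(n)); kron(eye(n), L1)];
x0 = min(max(x_obs(:), 0), 1);               % corresponds to P_Omega(b) in the text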

Choose $x_{0}=P_{\Omega}(b)$. The image recovered by (14) from $x_{0}$ is shown in Figure 3(c), with PSNR = 19.65 dB. The convergence of the objective value and of PSNR$(x(t))$ along the solution of (14) with initial point $x_{0}$ is presented in Figures 4(a) and 4(b). From these figures we find that the objective value is monotonically decreasing and the PSNR is monotonically increasing along the solution of (14).

5. Conclusion

Based on the smoothing approximation technique and the projected gradient method, we construct a neural network, modeled by a differential equation, to solve a class of constrained nonsmooth convex optimization problems, which have wide applications in sparse reconstruction. The proposed network has a unique and bounded solution for any initial point in the feasible region.

Figure 3: (a) original image; (b) observed image; (c) recovered image.

Figure 4: (a) convergence of the objective value; (b) convergence of PSNR(x(t)).


Moreover, the solution of the proposed network converges to the solution set of the optimization problem. Simulation results on numerical examples are elaborated upon to substantiate the effectiveness and performance of the neural network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Editor-in-Chief, Professor Huaiqin Wu, and the three anonymous reviewers for their insightful and constructive comments, which helped to enrich the content and improve the presentation of the results in this paper. This work is supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and the National Natural Science Foundation of China (31370565).

References

[1] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.

[2] R. Saab, O. Yilmaz, M. J. McKeown, and R. Abugharbieh, "Underdetermined anechoic blind source separation via l_q-basis-pursuit with q < 1," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4004–4017, 2007.

[3] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[4] L. B. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2494–2502, 2009.

[5] Y. Xiang, S. K. Ng, and V. K. Nguyen, "Blind separation of mutually correlated sources using precoders," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 82–90, 2010.

[6] M. K. Ng, R. H. Chan, and W.-C. Tang, "A fast algorithm for deblurring models with Neumann boundary conditions," SIAM Journal on Scientific Computing, vol. 21, no. 3, pp. 851–866, 1999.

[7] D. L. Donoho, "Neighborly polytopes and sparse solutions of underdetermined linear equations," Tech. Rep., Department of Statistics, Stanford University, Stanford, Calif, USA, 2005.

[8] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.

[9] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, London, UK, 1993.

[10] W. Bian and X. Xue, "Subgradient-based neural networks for nonsmooth nonconvex optimization problems," IEEE Transactions on Neural Networks, vol. 20, no. 6, pp. 1024–1038, 2009.

[11] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.

[12] X. B. Gao and L. Z. Liao, "A new one-layer recurrent neural network for nonsmooth convex optimization subject to linear equality constraints," IEEE Transactions on Neural Networks, vol. 21, pp. 918–929, 2010.

[13] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.

[14] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533–541, 1986.

[15] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.

[16] E. K. P. Chong, S. Hui, and S. H. Zak, "An analysis of a class of neural networks for solving linear programming problems," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 1995–2006, 1999.

[17] M. Forti, P. Nistri, and M. Quincampoix, "Generalized neural network for nonsmooth nonlinear programming problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 9, pp. 1741–1754, 2004.

[18] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 55, no. 8, pp. 2378–2391, 2008.

[19] X. Chen, M. K. Ng, and C. Zhang, "Nonconvex lp-regularization and box constrained model for image restoration," IEEE Transactions on Image Processing, vol. 21, pp. 4709–4721, 2010.

[20] W. Bian and X. Chen, "Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 545–556, 2014.

[21] X. Chen, "Smoothing methods for complementarity problems and their applications: a survey," Journal of the Operations Research Society of Japan, vol. 43, no. 1, pp. 32–47, 2000.

[22] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, Germany, 1998.

[23] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.

[24] Q. Liu and J. Cao, "A recurrent neural network based on projection operator for extended general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 40, no. 3, pp. 928–938, 2010.
