Research Article

An Efficient Method for Convex Constrained Rank Minimization Problems Based on DC Programming

Wanping Yang, Jinkai Zhao, and Fengmin Xu

School of Economics and Finance, Xi'an Jiaotong University, Xi'an 710061, China

Correspondence should be addressed to Fengmin Xu; [email protected]

Received 26 January 2016; Revised 19 May 2016; Accepted 2 June 2016

Academic Editor: Srdjan Stankovic

Copyright © 2016 Wanping Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The constrained rank minimization problem has various applications in many fields, including machine learning, control, and signal processing. In this paper, we consider the convex constrained rank minimization problem. By introducing a new variable and penalizing an equality constraint into the objective function, we reformulate the convex objective function with a rank constraint as a difference of convex functions based on closed-form solutions, which can be recast as DC programming. A stepwise linear approximative algorithm is provided for solving the reformulated model. The performance of our method is tested by applying it to affine rank minimization problems and max-cut problems. Numerical results demonstrate that the method is effective and of high recoverability, and the results on max-cut show that the method is feasible: it provides better lower bounds and lower rank solutions than an improved approximation algorithm using semidefinite programming, and they are close to the results of the latest research.

1. Introduction

In recent decades, with the increase of system acquisition and data services, the explosion of data poses challenges to storage, transmission, and processing, as well as device design.
The burgeoning theory of sparse recovery reveals an outstanding sensing ability on large scales of high-dimensional data. Recently, a new theory called compressed sensing (or compressive sampling (CS)) has appeared, and this theory is regarded as a breakthrough in the signal processing area. This approach can acquire signals while compressing data properly. Its sampling frequency is lower than the Nyquist rate, which makes the collection of high resolution signals possible. One noticeable merit of this approach is its ability to combine traditional data collection and data compression into one step when facing sparsely representable signals. This merit means that the sampling frequency of signals, the time and computational cost consumed on data processing, and the expense of data storage and transmission are all greatly reduced. Thus this approach leads signal processing into a new age. Similarly to CS, Low Rank Matrix theory can also solve signal reconstruction problems that arise in theory and practice. It too is carried out by validly exerting dimension-reduction treatments on unknown multidimensional signals and then applying sparsity concepts, in a broader sense, to the problems. That is to say, it reconstructs the original high-dimensional signal array, completely or approximately, from a small number of values measured by sparse (or dimension-reducing) sampling. Considering that both of these two theories are closely related to the intrinsic sparsity of signals, we generally view them as sparse recovery problems. The main goal of sparse recovery problems is to use less linear measurement data to recover a high-dimensional sparse signal. Its core mainly includes three steps: sparse representation of the signal, linear dimension-reduction measurement, and nonlinear reconstruction of the signal. These three parts of sparse recovery theory are closely related; each part has a significant impact on the quality of signal reconstruction.
However, the nonlinear reconstruction of the signal can be seen as a special case of a rank minimization problem to be solved. Constrained rank minimization problems, which aim to find sparse solutions of a system, have attracted a great deal of attention over recent years. The problems are widely applicable in a variety of fields including machine learning [1, 2], control [3, 4], Euclidean embedding [5], and image processing

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2016, Article ID 7473041, 13 pages. http://dx.doi.org/10.1155/2016/7473041




[6], and finance [7, 8], to name but a few. There are some typical examples of constrained rank minimization problems; one is the affine matrix rank minimization problem [5, 9, 10]:

$$\min\ \tfrac{1}{2}\,\|\mathcal{A}(X)-b\|_F^2 \quad \text{s.t.}\quad \operatorname{rank}(X)\le k, \tag{1}$$

where $X\in R^{m\times n}$ is the decision variable and the linear map $\mathcal{A}: R^{m\times n}\to R^{p}$ and the vector $b\in R^{p}$ are known. Problem (1) includes matrix completion based on affine rank minimization and compressed sensing. Matrix completion is widely used in collaborative filtering [11, 12], system identification [13, 14], sensor networks [15, 16], image processing [17, 18], sparse channel estimation [19, 20], spectrum sensing [21, 22], and multimedia coding [23, 24]. Another is the max-cut problem [25] in combinatorial optimization:

$$\begin{aligned} \min\ & \operatorname{Tr}(LX) \\ \text{s.t.}\ & \operatorname{diag}(X)=e, \\ & \operatorname{rank}(X)=1, \\ & X\succeq 0, \end{aligned} \tag{2}$$

where $X=xx^{T}$, $x=(x_1,\dots,x_n)^{T}$, $L=\frac{1}{4}(\operatorname{diag}(We)-W)$, $e=(1,1,\dots,1)^{T}$, and $W$ is the weight matrix, and so on. As a matter of fact, these problems can be written as a common convex constrained rank minimization model; that is,

$$\begin{aligned} \min\ & f(X) \\ \text{s.t.}\ & \operatorname{rank}(X)\le k, \\ & X\in\Gamma\cap\Omega, \end{aligned} \tag{3}$$

where $f: R^{m\times n}\to R$ is a continuously differentiable convex function, $\operatorname{rank}(X)$ is the rank of the matrix $X$, $k\ge 0$ is a given integer, $\Gamma$ is a closed convex set, and $\Omega$ is a closed unitarily invariant convex set. According to [26], a set $C\subseteq R^{n\times n}$ is a unitarily invariant set if

$$\{UXV \mid U\in\mathcal{U}^{m},\ V\in\mathcal{U}^{n},\ X\in C\}=C, \tag{4}$$

where $\mathcal{U}^{n}$ denotes the set of all unitary matrices in $R^{n\times n}$. Various types of methods have been proposed to solve (3) or its special cases with different types of $\Gamma\cap\Omega$.
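To make formulation (2) concrete, the sketch below (hypothetical weights, not from the paper) builds $L=\frac{1}{4}(\operatorname{diag}(We)-W)$ for a small 4-node graph and checks that $\operatorname{Tr}(LX)$ with $X=xx^{T}$ equals the total weight of the edges cut by the $\pm 1$ labeling $x$, and that $X$ satisfies the constraints $\operatorname{diag}(X)=e$, $\operatorname{rank}(X)=1$, $X\succeq 0$.

```python
import numpy as np

# Hypothetical 4-node weighted graph (symmetric, zero diagonal).
W = np.array([[0., 1., 2., 0.],
              [1., 0., 0., 3.],
              [2., 0., 0., 1.],
              [0., 3., 1., 0.]])
e = np.ones(4)
L = 0.25 * (np.diag(W @ e) - W)   # the matrix L from formulation (2)

x = np.array([1., -1., -1., 1.])  # a +/-1 cut indicator vector
X = np.outer(x, x)                # X = x x^T: rank 1, diag(X) = e, X PSD

# Tr(LX) = (1/4)(sum_i deg_i - x^T W x) = total weight of cut edges.
cut_weight = np.trace(L @ X)
```

Here the cut $\{1,4\}$ vs $\{2,3\}$ severs the edges of weight 1, 2, 3, and 1, so `cut_weight` evaluates to 7.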

When $\Gamma\cap\Omega$ is $R^{m\times n}$, (3) is the unconstrained rank minimization problem. One approach is to solve this case by convex relaxation of $\operatorname{rank}(\cdot)$: Cai et al. [9] proposed the singular value thresholding (SVT) algorithm, and Ma et al. [10] proposed the fixed point continuation (FPC) algorithm. Another approach is to handle the rank constraint directly: Jain et al. [27] proposed the simple and efficient Singular Value Projection (SVP) algorithm based on projected gradients, Haldar and Hernando [28] proposed an alternating least squares approach, and Keshavan et al. [29] proposed an algorithm based on optimization over a Grassmann manifold.

When $\Gamma\cap\Omega$ is a convex set, (3) is a convex constrained rank minimization problem. One approach is to solve it by replacing the rank constraint with other constraints, such as a trace constraint or a norm constraint. Based on this, several heuristic methods have been proposed in the literature; for example, Nesterov et al. [30] proposed interior-point polynomial algorithms, and Weimin [31] proposed an adaptive seminuclear norm regularization approach; in addition, semidefinite programming (SDP) has also been applied; see [32]. Another approach is to handle the rank constraint directly: Grigoriadis and Beran [33] proposed alternating projection algorithms for linear matrix inequality problems, Mohseni et al. [24] proposed the penalty decomposition (PD) method based on penalty functions, Gao and Sun [34] recently proposed the majorized penalty approach, and Burer et al. [35] proposed a nonlinear programming (NLP) reformulation approach.

In this paper, we consider problem (3). We introduce a new variable and penalize the equality constraint into the objective function. By this technique we can obtain a closed-form solution and use it to reformulate the objective function of problem (3) as a difference of convex functions; then (3) is reformulated as DC programming, which can be solved via a linear approximation method. A function is called DC if it can be represented as the difference of two convex functions; mathematical programming problems dealing with DC functions are called DC programming problems. Our method is different from the original PD method [26]: for one thing, we penalize only the equality constraint into the objective function and keep the other constraints, while PD penalizes everything except the rank constraint into the objective function, including equality and inequality constraints; for another, each subproblem of PD is approximately solved by a block coordinate descent method, while we approximate problem (3) by a convex optimization problem to be solved directly. Compared with the PD method, our method uses the closed-form solutions to remove the rank constraint, while PD needs to consider the rank constraint in each iteration; it is known that the rank constraint is nonconvex and discontinuous, which is not easy to deal with. And owing to the remaining convex constraints, the formulated programming can be solved by solving a series of convex programs. In our method, we use a linear function instead of the last convex function to turn the programming into convex programming. We test the performance of our method by applying it to affine rank minimization problems and max-cut problems. Numerical results show that the algorithm is feasible and effective.

The rest of this paper is organized as follows. In Section 1.1 we introduce the notation that is used throughout the paper. Our method is presented in Section 2. In Section 3 we apply it to some practical problems to test its performance.

1.1. Notations. In this paper, the symbol $R^{n}$ denotes the $n$-dimensional Euclidean space, and the set of all $m\times n$ matrices with real entries is denoted by $R^{m\times n}$. Given matrices $X$ and $Y$ in $R^{m\times n}$, the standard inner product is defined by $\langle X,Y\rangle:=\operatorname{Tr}(XY^{T})$, where $\operatorname{Tr}(\cdot)$ denotes the trace of a matrix. The Frobenius norm of a real matrix $X$ is defined as $\|X\|_F:=\sqrt{\operatorname{Tr}(XX^{T})}$, and $\|X\|_*$ denotes the nuclear norm of $X$, that is, the sum of the singular values of $X$. The rank of a matrix $X$ is denoted by $\operatorname{rank}(X)$. We use $I_q$ to denote the identity matrix of dimension $q$. Throughout this paper we always assume that the singular values are arranged in nonincreasing order, that is, $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_r>0=\sigma_{r+1}=\cdots=\sigma_{\min\{m,n\}}$. $\partial f$ denotes the subdifferential of the function $f$. We denote the adjoint of $A$ by $A^{*}$, and $\|\cdot\|_{(k)}$ is used to denote the Ky Fan $k$-norm. Let $P_{\Omega}$ be the projection onto the closed convex set $\Omega$.
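The Ky Fan $k$-norm just introduced can be illustrated numerically (a sketch with an arbitrary matrix): $\|A\|_{(k)}$ is the sum of the $k$ largest singular values, and for $k=\min\{m,n\}$ it coincides with the nuclear norm $\|A\|_*$.

```python
import numpy as np

def ky_fan(A, k):
    """Ky Fan k-norm: sum of the k largest singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, nonincreasing
    return s[:k].sum()

A = np.array([[3., 0., 0.],
              [0., 2., 0.]])
# The singular values of A are 3 and 2, so ky_fan(A, 1) = 3 and
# ky_fan(A, 2) = 5 = ||A||_* (nuclear norm, since min{m, n} = 2).
```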

2. An Efficient Algorithm Based on DC Programming

In this section, we first reformulate problem (3) as a DC programming problem over the convex constraint set, which can be solved via the stepwise linear approximation method. Then the convergence is established.

2.1. Model Transformation. Let $X=Y$. Problem (3) is rewritten as

$$\begin{aligned} \min\ & f(X) \\ \text{s.t.}\ & X=Y, \\ & \operatorname{rank}(Y)\le k, \\ & X\in\Gamma\cap\Omega. \end{aligned} \tag{5}$$

Adopting the penalty function method to move the constraint $X=Y$ into the objective function and choosing a proper penalty parameter $\rho$, (5) can be reformulated as

$$\begin{aligned} \min\ & f(X)+\frac{\rho}{2}\|X-Y\|_F^2 \\ \text{s.t.}\ & \operatorname{rank}(Y)\le k, \\ & X\in\Gamma\cap\Omega. \end{aligned} \tag{6}$$

Solving for $Y$ with $X\in\Gamma\cap\Omega$ fixed, (6) can be treated as

$$\begin{aligned} \min_{X}\ & f(X)+\min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\|X-Y\|_F^2 \\ \text{s.t.}\ & X\in\Gamma\cap\Omega. \end{aligned} \tag{7}$$

In fact, given the singular value decomposition $X=USV^{T}$, with singular values $\sigma_j(X)$, $j=1,\dots,\min\{m,n\}$, where $U\in R^{m\times m}$ and $V\in R^{n\times n}$ are orthogonal matrices, the inner minimization problem with respect to $Y$ with $X$ fixed in (7), that is, $\min_{\operatorname{rank}(Y)\le k}(\rho/2)\|X-Y\|_F^2$, has the closed-form solution $Y=UCV^{T}$ with

$$C=\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}, \tag{8}$$

where $D=\operatorname{diag}(\sigma_1,\sigma_2,\sigma_3,\dots,\sigma_k)\in R^{k\times k}$. Thus, by the closed-form solution and the unitary invariance of the $F$-norm, we have

$$\begin{aligned} \min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\|X-Y\|_F^2 &= \min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\big\|U^{T}XV-U^{T}YV\big\|_F^2 \\ &= \frac{\rho}{2}\|S-C\|_F^2 \\ &= \frac{\rho}{2}\|X\|_F^2-\frac{\rho}{2}\sum_{i=1}^{k}\sigma_i^2(X). \end{aligned} \tag{9}$$
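The closed-form solution above is the classical best rank-$k$ (truncated SVD) approximation. The following sketch (random data, $\rho=1$, an illustration only) checks that the minimizer $Y=UCV^{T}$ attains the value $(\rho/2)\|X\|_F^2-(\rho/2)\sum_{i=1}^{k}\sigma_i^2(X)$ derived in (9).

```python
import numpy as np

def best_rank_k(X, k):
    """Closed-form minimizer of ||X - Y||_F^2 over rank(Y) <= k (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # U C V^T with C keeping top-k sigma

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
k, rho = 2, 1.0

Y = best_rank_k(X, k)
s = np.linalg.svd(X, compute_uv=False)

# Attained objective vs. the closed-form value from (9).
attained = 0.5 * rho * np.linalg.norm(X - Y, 'fro')**2
closed_form = 0.5 * rho * (np.linalg.norm(X, 'fro')**2 - (s[:k]**2).sum())
```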

Substituting the above into problem (7), problem (3) is reformulated as

$$\begin{aligned} \min\ & f(X)+\frac{\rho}{2}\|X\|_F^2-\frac{\rho}{2}\sum_{i=1}^{k}\sigma_i^2(X) \\ \text{s.t.}\ & X\in\Gamma\cap\Omega. \end{aligned} \tag{10}$$

Clearly, $f(X)+(\rho/2)\|X\|_F^2$ is a convex function of $X$. Define the function $g: R^{m\times n}\to R$, $g(X)=\sum_{i=1}^{k}\sigma_i^2(X)$, $1\le k\le\min\{m,n\}$. Next we study the properties of $g(X)$. Problem (3) would be reformulated as a DC programming problem with convex constraints if $g(X)$ is a convex function, which can then be solved via the stepwise linear approximation method. In fact, this does hold, and $g(X)$ equals a trace function.

Theorem 1. Let

$$W=\begin{pmatrix} I_k & O \\ O & O \end{pmatrix}\in R^{m\times m}, \tag{11}$$

where $O$ denotes a zero matrix whose dimension is determined by context; then for any $X=USV^{T}\in R^{m\times n}$ the following hold:

(1) $g(X)$ is a convex function;

(2) $g(X)=\operatorname{Tr}(X^{T}UWU^{T}X)$.

Proof. (1) Since $\sigma_j^2(X)$, $j=1,\dots,\min\{m,n\}$, are the eigenvalues of $XX^{T}$, we have $g(X)=\sum_{i=1}^{k}\sigma_i^2(X)=\|XX^{T}\|_{(k)}$, where $\|\cdot\|_{(k)}$ is the Ky Fan $k$-norm (defined in [36]). Clearly $\|\cdot\|_{(k)}$ is a convex function. Let $\varphi(\cdot)=\|\cdot\|_{(k)}$, $1\le k\le\min\{m,n\}$. For all $A,B\in R^{m\times n}$ and $\alpha\in[0,1]$, we have

$$\begin{aligned} g(\alpha A+(1-\alpha)B) &= \varphi\big((\alpha A+(1-\alpha)B)(\alpha A+(1-\alpha)B)^{T}\big) \\ &= \varphi\big(\alpha^2 AA^{T}+\alpha(1-\alpha)AB^{T}+\alpha(1-\alpha)BA^{T}+(1-\alpha)^2 BB^{T}\big) \\ &\le \alpha^2\varphi(AA^{T})+\alpha(1-\alpha)\varphi(AB^{T})+\alpha(1-\alpha)\varphi(BA^{T})+(1-\alpha)^2\varphi(BB^{T}). \end{aligned} \tag{12}$$


Note that $BA^{T}=(AB^{T})^{T}$; thus they have the same nonzero singular values, and then $\varphi(BA^{T})=\varphi(AB^{T})$ holds. Together with (12) we have

$$\varphi\big((\alpha A+(1-\alpha)B)(\alpha A+(1-\alpha)B)^{T}\big) \le \alpha^2\varphi(AA^{T})+2\alpha(1-\alpha)\varphi(AB^{T})+(1-\alpha)^2\varphi(BB^{T}). \tag{13}$$

It is known that for arbitrary matrices $A,B\in R^{m\times m}$,

$$\sigma_j(A^{*}B)\le\frac{1}{2}\,\sigma_j(A^{*}A+B^{*}B),\quad 1\le j\le m \quad \text{(see Theorem IX.4.2 in [36])}. \tag{14}$$

Further,

$$\sum_{j=1}^{k}\sigma_j(A^{T}B)\le\sum_{j=1}^{k}\frac{1}{2}\,\sigma_j(A^{*}A+B^{*}B),\quad 1\le k\le n. \tag{15}$$

That is,

$$\varphi(AB^{T})\le\frac{1}{2}\,\varphi(AA^{T}+BB^{T}), \tag{16}$$

where $AA^{T}$ and $BB^{T}$ are symmetric matrices. It is also known that for arbitrary Hermitian matrices $AA^{T},BB^{T}\in R^{m\times m}$ and $k=1,2,\dots,m$ the following holds (see Exercise II.1.15 in [24]):

$$\sum_{j=1}^{k}\sigma_j(AA^{T}+BB^{T})\le\sum_{j=1}^{k}\sigma_j(AA^{T})+\sum_{j=1}^{k}\sigma_j(BB^{T}). \tag{17}$$

Thus

$$\varphi(A^{T}A+B^{T}B)=\varphi(AA^{T}+BB^{T})\le\varphi(AA^{T})+\varphi(BB^{T}). \tag{18}$$

Together with (13) and (16), (18) yields

$$g(\alpha A+(1-\alpha)B)\le\alpha\varphi(AA^{T})+(1-\alpha)\varphi(BB^{T})=\alpha g(A)+(1-\alpha)g(B). \tag{19}$$

Hence $g(X)=\sum_{i=1}^{k}\sigma_i^2(X)$ is a convex function.

(2) Let $\Theta=\operatorname{diag}(\sigma_1,\sigma_2,\sigma_3,\dots,\sigma_r)$, where $r$ is the rank of $X$ and $k\le r$, and

$$S=\begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}, \tag{20}$$

and we have

$$\operatorname{Tr}(X^{T}UWU^{T}X)=\operatorname{Tr}(VS^{T}U^{T}UWU^{T}USV^{T})=\operatorname{Tr}(VS^{T}WSV^{T})=\operatorname{Tr}(S^{T}WS). \tag{21}$$

Moreover,

$$S^{T}WS=\begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}^{T}\begin{pmatrix} I_k & O \\ O & O \end{pmatrix}\begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}=\begin{pmatrix} \widetilde{\Theta} & O \\ O & O \end{pmatrix}, \tag{22}$$

where $\widetilde{\Theta}=\operatorname{diag}(\sigma_1^2,\sigma_2^2,\sigma_3^2,\dots,\sigma_k^2)$.

Thus $\operatorname{Tr}(S^{T}WS)=\sum_{i=1}^{k}\sigma_i^2(X)=g(X)$ holds.

According to the above theorem, (10) can be rewritten as

$$\begin{aligned} \min\ & f(X)+\frac{\rho}{2}\|X\|_F^2-\frac{\rho}{2}\operatorname{Tr}(X^{T}UWU^{T}X) \\ \text{s.t.}\ & X\in\Gamma\cap\Omega. \end{aligned} \tag{23}$$

Since the objective function of (23) is a difference of convex functions and the constraint set $\Gamma\cap\Omega$ is a closed convex set, (23) is DC programming. Next, we use the stepwise linear approximation method to solve it.

2.2. Stepwise Linear Approximative Algorithm. In this subsection, we use the stepwise linear approximation method [37] to solve the reformulated model (23). Let $F(X)$ denote the objective function of model (23); that is,

$$F(X)=f(X)+\frac{\rho}{2}\|X\|_F^2-\frac{\rho}{2}\operatorname{Tr}(X^{T}UWU^{T}X), \tag{24}$$

and the linear approximation function of $F(X)$ can be defined as

$$H(X,X^{i})=f(X)+\frac{\rho}{2}\|X\|_F^2-\frac{\rho}{2}\big[g(X^{i})+\langle\partial g(X^{i}),X-X^{i}\rangle\big], \tag{25}$$

where $\partial g(X)/\partial X=\partial\operatorname{Tr}(X^{T}UWU^{T}X)/\partial X=2UWU^{T}X$ (see 5.2.2 in [36]).

The stepwise linear approximative algorithm for solving model (23) can be described as follows.

Given an initial matrix $X^{0}\in\Gamma\cap\Omega$, a penalty parameter $\rho>0$, a rank $k>0$, and $\tau>1$, set $i=0$.

Step 1. Compute the singular value decomposition of $X^{i}$: $U^{i}\Sigma^{i}(V^{i})^{T}=X^{i}$.

Step 2. Update

$$X^{i+1}\in\arg\min_{X\in\Gamma\cap\Omega} H(X,X^{i}). \tag{26}$$

Step 3. If the stopping criterion is satisfied, stop; else set $i\leftarrow i+1$ and $\rho\leftarrow(1+\tau)\rho$, and go to Step 1.
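As an illustration only (not the authors' code), here is a minimal sketch of the algorithm for the unconstrained case $\Gamma\cap\Omega=R^{m\times n}$ with the simple choice $f(X)=\frac{1}{2}\|X-B\|_F^2$ (an assumption made for this sketch, not one of the paper's experiments). For this $f$, Step 2 has the closed form $X^{i+1}=(B+\rho\,U^{i}W(U^{i})^{T}X^{i})/(1+\rho)$, and $U^{i}W(U^{i})^{T}X^{i}$ is just the rank-$k$ truncation of $X^{i}$.

```python
import numpy as np

def truncate_rank_k(X, k):
    """U W U^T X, i.e., the best rank-k approximation of X (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def stepwise_linear_approx(B, k, rho=1.0, tau=2.0, eps=1e-10, max_iter=200):
    """Minimize f(X) = 0.5||X - B||_F^2 s.t. rank(X) <= k via model (23).

    Each iteration solves the linearized subproblem (26) in closed form,
    then inflates the penalty: rho <- (1 + tau) * rho (Step 3).
    """
    X = B.copy()                                       # initial point X^0
    for _ in range(max_iter):
        X_new = (B + rho * truncate_rank_k(X, k)) / (1.0 + rho)
        if np.linalg.norm(X_new - X, 'fro') <= eps:    # stopping rule
            return X_new
        X, rho = X_new, (1.0 + tau) * rho
    return X

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 6))
k = 3
X_opt = stepwise_linear_approx(B, k)
```

For this instance the iterates keep the singular vectors of $B$ while the trailing singular values are driven to zero, so the method lands on the best rank-$k$ approximation of $B$, as expected from (9).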

Remark.

(1) Subproblem (26) can be handled by solving a series of convex subproblems. Efficient methods for convex programming are plentiful and come with well-established theoretical guarantees.

(a) When $\Gamma\cap\Omega$ is $R^{m\times n}$, closed-form solutions can be obtained; for example, when solving model (1), $X^{i+1}=(\mathcal{A}^{*}\mathcal{A}+\rho I)^{-1}\big(\mathcal{A}^{*}b+\rho U^{i}W(U^{i})^{T}X^{i}\big)$ is obtained from the first-order optimality condition

$$0\in\partial f(X)+\rho X-\rho U^{i}W(U^{i})^{T}X^{i}. \tag{27}$$

(b) When $\Gamma\cap\Omega$ is a general convex set, a closed-form solution is hard to get, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). According to the particular closed convex constraint set, we can choose a suitable convex programming algorithm to combine with our method; the toolbox listed in this paper is just one possible choice.

(2) The stopping rule adopted here is $\|X^{i+1}-X^{i}\|_F\le\varepsilon$ for some $\varepsilon$ small enough.

2.3. Convergence Analysis. In this subsection, we analyze the convergence properties of the sequence $\{X^{i}\}$ generated by the above algorithm. Before we prove the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

Lemma 2. For any $Y_1$ and $Y_2\in R^{m\times n}$,

$$\big\|\widehat{Y}_1-\widehat{Y}_2\big\|_F\le\big\|Y_1-Y_2\big\|_F, \tag{28}$$

where $\widehat{Y}_i$ is the approximation of $Y_i$ obtained by keeping its first $k$ singular values ($i=1,2$).

Proof. Without loss of generality, we assume $m\le n$ and the singular value decompositions $Y_1=U_1S_1V_1^{T}$, $Y_2=U_2S_2V_2^{T}$ with

$$S_1=\begin{pmatrix} \operatorname{diag}(\sigma) & 0 \\ 0 & 0 \end{pmatrix}\in R^{m\times n},\qquad S_2=\begin{pmatrix} \operatorname{diag}(\gamma) & 0 \\ 0 & 0 \end{pmatrix}\in R^{m\times n}, \tag{29}$$

where $\sigma=(\sigma_1,\sigma_2,\dots,\sigma_s)$, $\sigma_1\ge\cdots\ge\sigma_s>0$, and $\gamma=(\gamma_1,\gamma_2,\dots,\gamma_t)$, $\gamma_1\ge\cdots\ge\gamma_t>0$. Then $\widehat{Y}_i$ can be written as $\widehat{Y}_i=U_iWS_iV_i^{T}$ ($i=1,2$). Noting that $U_1,V_1,U_2,V_2$ are orthogonal matrices, we have

$$\begin{aligned} \|Y_1-Y_2\|_F^2-\|\widehat{Y}_1-\widehat{Y}_2\|_F^2 &= \operatorname{Tr}\big((Y_1-Y_2)^{T}(Y_1-Y_2)\big)-\operatorname{Tr}\big((\widehat{Y}_1-\widehat{Y}_2)^{T}(\widehat{Y}_1-\widehat{Y}_2)\big) \\ &= \operatorname{Tr}\big(Y_1^{T}Y_1-\widehat{Y}_1^{T}\widehat{Y}_1+Y_2^{T}Y_2-\widehat{Y}_2^{T}\widehat{Y}_2\big)-2\operatorname{Tr}\big(Y_1^{T}Y_2-\widehat{Y}_1^{T}\widehat{Y}_2\big) \\ &= \sum_{i=1}^{s}\sigma_i^2-\sum_{i=1}^{k}\sigma_i^2+\sum_{i=1}^{t}\gamma_i^2-\sum_{i=1}^{k}\gamma_i^2-2\operatorname{Tr}\big(Y_1^{T}Y_2-\widehat{Y}_1^{T}\widehat{Y}_2\big). \end{aligned} \tag{30}$$

And we note that

$$\begin{aligned} \operatorname{Tr}\big(Y_1^{T}Y_2-\widehat{Y}_1^{T}\widehat{Y}_2\big) &= \operatorname{Tr}\big((Y_1-\widehat{Y}_1)^{T}(Y_2-\widehat{Y}_2)+(Y_1-\widehat{Y}_1)^{T}\widehat{Y}_2+\widehat{Y}_1^{T}(Y_2-\widehat{Y}_2)\big) \\ &= \operatorname{Tr}\big(V_1(S_1-WS_1)^{T}U_1^{T}U_2(S_2-WS_2)V_2^{T}+V_1(S_1-WS_1)^{T}U_1^{T}U_2WS_2V_2^{T} \\ &\qquad\ +V_1(WS_1)^{T}U_1^{T}U_2(S_2-WS_2)V_2^{T}\big) \\ &= \operatorname{Tr}\big((S_1-WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}+(S_1-WS_1)^{T}\widetilde{U}WS_2\widetilde{V}^{T} \\ &\qquad\ +(WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}\big), \end{aligned} \tag{31}$$

where $\widetilde{U}=U_1^{T}U_2$ and $\widetilde{V}=V_1^{T}V_2$ are also orthogonal matrices.

Next, we estimate an upper bound for $\operatorname{Tr}(Y_1^{T}Y_2-\widehat{Y}_1^{T}\widehat{Y}_2)$. According to Theorem 7.4.9 in [38], an orthogonal matrix $U$ is a maximizing matrix for the problem $\max\{\operatorname{Tr}(AU): U \text{ orthogonal}\}$ if and only if $AU$ is a positive semidefinite matrix. Thus $\operatorname{Tr}((S_1-WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T})$, $\operatorname{Tr}((S_1-WS_1)^{T}\widetilde{U}WS_2\widetilde{V}^{T})$, and $\operatorname{Tr}((WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T})$ achieve their maxima if and only if $(S_1-WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}$, $(S_1-WS_1)^{T}\widetilde{U}WS_2\widetilde{V}^{T}$, and $(WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}$ are all positive semidefinite. It is known that when $AB$ is positive semidefinite,

$$\operatorname{Tr}(AB)=\sum_{i}\sigma_i(AB)\le\sum_{i}\sigma_i(A)\,\sigma_i(B). \tag{32}$$

Applying (32) to the above three terms, we have

$$\begin{aligned} \operatorname{Tr}\big((S_1-WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}\big) &\le \sum_{i}\sigma_i(S_1-WS_1)\,\sigma_i(S_2-WS_2), \\ \operatorname{Tr}\big((S_1-WS_1)^{T}\widetilde{U}WS_2\widetilde{V}^{T}\big) &\le \sum_{i}\sigma_i(S_1-WS_1)\,\sigma_i(WS_2), \\ \operatorname{Tr}\big((WS_1)^{T}\widetilde{U}(S_2-WS_2)\widetilde{V}^{T}\big) &\le \sum_{i}\sigma_i(WS_1)\,\sigma_i(S_2-WS_2). \end{aligned} \tag{33}$$

Hence, without loss of generality assuming $k\le s\le t$, we have

$$\begin{aligned} \|Y_1-Y_2\|_F^2-\|\widehat{Y}_1-\widehat{Y}_2\|_F^2 &\ge \sum_{i=1}^{s}\sigma_i^2-\sum_{i=1}^{k}\sigma_i^2+\sum_{i=1}^{t}\gamma_i^2-\sum_{i=1}^{k}\gamma_i^2-2\sum_{i=k+1}^{s}\sigma_i\gamma_i \\ &= \sum_{i=k+1}^{s}(\sigma_i-\gamma_i)^2+\sum_{i=s+1}^{t}\gamma_i^2\ \ge\ 0, \end{aligned} \tag{34}$$

which implies that $\|\widehat{Y}_1-\widehat{Y}_2\|_F\le\|Y_1-Y_2\|_F$ holds.

Next, we claim that the objective values of the iterates of the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let $\{X^{i}\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^{i})\}$ is monotonically nonincreasing.

Proof. Since $X^{i+1}\in\arg\min\{H(X,X^{i}) \mid X\in\Gamma\cap\Omega\}$, we have

$$H(X^{i+1},X^{i})\le H(X^{i},X^{i})=f(X^{i})+\frac{\rho}{2}\|X^{i}\|_F^2-\frac{\rho}{2}g(X^{i})=F(X^{i}), \tag{35}$$

and $\partial g(X)$ is the subdifferential of $g(X)$, which implies

$$\forall X,Y:\quad g(X)\ge g(Y)+\langle\partial g(Y),X-Y\rangle. \tag{36}$$

Letting $X=X^{i+1}$ and $Y=X^{i}$ in the above inequality, we have

$$g(X^{i+1})\ge g(X^{i})+\langle\partial g(X^{i}),X^{i+1}-X^{i}\rangle. \tag{37}$$

Therefore,

$$\begin{aligned} H(X^{i+1},X^{i}) &= f(X^{i+1})+\frac{\rho}{2}\|X^{i+1}\|_F^2-\frac{\rho}{2}\big[g(X^{i})+\langle\partial g(X^{i}),X^{i+1}-X^{i}\rangle\big] \\ &\ge f(X^{i+1})+\frac{\rho}{2}\|X^{i+1}\|_F^2-\frac{\rho}{2}g(X^{i+1})=F(X^{i+1}). \end{aligned} \tag{38}$$

Consequently,

$$F(X^{i+1})\le H(X^{i+1},X^{i})\le H(X^{i},X^{i})=F(X^{i}). \tag{39}$$

We immediately obtain the conclusion as desired.
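The monotonicity in Theorem 3 can be observed numerically. The sketch below (an illustration with random data, not the paper's experiments) fixes $\rho$, takes $f(X)=\frac{1}{2}\|X-B\|_F^2$, for which subproblem (26) has the closed form $X^{i+1}=(B+\rho\,T_k(X^{i}))/(1+\rho)$ with $T_k$ the rank-$k$ truncation, and records $F(X^{i})$ along the iteration.

```python
import numpy as np

def trunc_k(X, k):
    """Rank-k truncated SVD of X, i.e., U^i W (U^i)^T X^i in the algorithm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def F(X, B, rho, k):
    """Objective (24) for f(X) = 0.5||X - B||_F^2."""
    s = np.linalg.svd(X, compute_uv=False)
    return (0.5 * np.linalg.norm(X - B, 'fro')**2
            + 0.5 * rho * (np.linalg.norm(X, 'fro')**2 - (s[:k]**2).sum()))

rng = np.random.default_rng(3)
B = rng.standard_normal((7, 5))
k, rho = 2, 5.0

X = rng.standard_normal((7, 5))   # arbitrary starting point
values = [F(X, B, rho, k)]
for _ in range(30):               # rho held fixed, as in Theorem 3
    X = (B + rho * trunc_k(X, k)) / (1.0 + rho)
    values.append(F(X, B, rho, k))
```

Since the update minimizes the majorizer $H(\cdot,X^{i})$ of $F$ exactly, the recorded `values` form a nonincreasing sequence, matching (39).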

Next we show the convergence of iterative sequencegenerated by stepwise linear approximative algorithm whensolving (23)Theorem 4 shows the convergence when solving(23) with Γ cap Ω = 119877119898times119899

Theorem 4. Let $\Upsilon$ be the set of stable points of problem (23) with no constraints; then the iterative sequence $\{X^i\}$ generated by the stepwise linear approximative algorithm converges to some $X^{\mathrm{opt}} \in \Upsilon$ when $\rho \to \infty$.

Proof. Assume that $X^{\mathrm{opt}} \in \Upsilon$ is any optimal solution of problem (23) with $\Gamma \cap \Omega = R^{m \times n}$. According to the first-order optimality conditions, we have
$$0 = \partial f(X^{\mathrm{opt}}) + \rho X^{\mathrm{opt}} - \rho U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}}. \quad (40)$$

Since $X^{i+1} \in \arg\min\{f(X) + (\rho/2)\|X\|_F^2 - (\rho/2)[g(X^i) + \langle \partial g(X^i), X - X^i \rangle]\}$, we have
$$0 = \partial f(X^{i+1}) + \rho X^{i+1} - \rho U^i W (U^i)^T X^i. \quad (41)$$

Subtracting (40) from (41) yields
$$X^{i+1} - X^{\mathrm{opt}} = \frac{1}{\rho}\left(\partial f(X^{\mathrm{opt}}) - \partial f(X^{i+1})\right) + U^i W (U^i)^T X^i - U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}}, \quad (42)$$
in which $U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i = U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T$.

By Lemma 2 we have
$$\left\| U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i \right\|_F = \left\| U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T \right\|_F \le \left\| X^i - X^{\mathrm{opt}} \right\|_F. \quad (43)$$

Taking norms on both sides of (42) and applying the triangle inequality, we have, as $\rho \to \infty$,
$$\|X^{i+1} - X^{\mathrm{opt}}\|_F \le \lim_{\rho \to \infty}\left( \frac{1}{\rho}\left\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\right\|_F + \|X^i - X^{\mathrm{opt}}\|_F \right) = \|X^i - X^{\mathrm{opt}}\|_F, \quad (44)$$

which implies that the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing when $\rho \to \infty$. We then observe that $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is bounded, and hence $\lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F$ exists, denoted by $\|\bar{X} - X^{\mathrm{opt}}\|_F$, where $\bar{X}$ is the limit point of $\{X^i\}$. Letting $i \to \infty$ in (41), by continuity we get

Mathematical Problems in Engineering 7

$0 = \partial f(\bar{X}) + \rho \bar{X} - \rho \bar{U} W \bar{U}^T \bar{X}$, which implies that $\bar{X} \in \Upsilon$. By setting $\bar{X} = X^{\mathrm{opt}}$ we have
$$\lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F = \|\bar{X} - X^{\mathrm{opt}}\|_F = 0, \quad (45)$$
which completes the proof.

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set $\Gamma \cap \Omega$.

Theorem 5. Let $\xi$ be the set of optimal solutions of problem (23); then the iterative sequence $\{Y^i\}$ generated by the stepwise linear approximative algorithm converges to some $Y^{\mathrm{opt}} \in \xi$ when $\rho \to \infty$.

Proof. Generally speaking, subproblem (26) can be solved by projecting the solution of the unconstrained problem onto the closed convex set; that is, $Y^{i+1} = P_{\Gamma \cap \Omega}(X^{i+1})$, where $X^{i+1} \in \arg\min H(X, X^i)$. According to Theorem 4, the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing and convergent when $\rho \to \infty$; together with the well-known fact that $P$ is a nonexpansive operator, we have
$$\|Y^i - Y^{\mathrm{opt}}\|_F = \|P_{\Gamma \cap \Omega}(X^i) - P_{\Gamma \cap \Omega}(X^{\mathrm{opt}})\|_F \le \|X^i - X^{\mathrm{opt}}\|_F, \quad (46)$$
where $Y^{\mathrm{opt}}$ is the projection of $X^{\mathrm{opt}}$ onto $\Gamma \cap \Omega$. Clearly $Y^{\mathrm{opt}} \in \xi$. Hence $0 \le \lim_{i \to \infty} \|Y^i - Y^{\mathrm{opt}}\|_F \le \lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F = 0$, which implies that $\{Y^i\}$ converges to $Y^{\mathrm{opt}}$. We immediately obtain the conclusion.

3. Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes used in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to one case of affine rank minimization problems, which can be formulated as
$$\min \ \frac{1}{2}\|\mathcal{A}(X) - b\|_2^2 \quad \text{s.t.} \ \mathrm{rank}(X) \le k, \ X \in R^{m \times n}, \quad (47)$$
where $X \in R^{m \times n}$ is the decision variable, and the linear map $\mathcal{A}: R^{m \times n} \to R^p$ and vector $b \in R^p$ are known. Clearly (47) is a special case of problem (3); hence our method can be suitably applied to (47).

Recht et al. [39] have demonstrated that linear maps sampled from certain classes of probability distributions obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric. First, it must be isometric in expectation. Second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first is the ensemble with independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Bernoulli. The third has zeros in two-thirds of the entries and the others sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Bernoulli. The last one has an orthonormal basis, which we call Random Projection.

Figure 1: Performance curves under four different measurement matrices: SNR versus the number of linear constraints.
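The "isometric in expectation" property is easy to check numerically. The sketch below is our own illustration: the $1/\sqrt{p}$ normalization is an assumption in the spirit of [39], and the dimensions are small stand-ins for those used in the experiments. It draws Gaussian measurement maps acting on $\mathrm{vec}(X)$ and verifies that $\|\mathcal{A}(X)\|_2^2$ concentrates around $\|X\|_F^2$.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_map(p, m, n):
    """Gaussian ensemble: i.i.d. N(0, 1/p) entries acting on vec(X),
    so that E||A(X)||_2^2 = ||X||_F^2 (isometric in expectation)."""
    return rng.standard_normal((p, m * n)) / np.sqrt(p)

def apply_map(A, X):
    """A(X): R^{m x n} -> R^p, applied to the vectorized matrix."""
    return A @ X.ravel()

m, n, p = 20, 30, 500            # small stand-ins for the 46 x 81 logo
X = rng.standard_normal((m, n))
norms = [np.linalg.norm(apply_map(gauss_map(p, m, n), X)) ** 2
         for _ in range(20)]
ratio = float(np.mean(norms)) / np.linalg.norm(X) ** 2   # close to 1
```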

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix $X$, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is 46 × 81 = 3726). Since the logo has only 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Bernoulli i.i.d., Sparse Bernoulli i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo $X$ to be recovered if $\|X - X^{\mathrm{opt}}\|_F / \|X\|_F \le 10^{-3}$, that is, $\mathrm{SNR} \ge 60$ dB, where $\mathrm{SNR} = -20\log_{10}(\|X - X^{\mathrm{opt}}\|_F / \|X\|_F)$. Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the SNR curves under the four measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the x-axis represent the number of linear constraints $p$, that is, the number of measurements, while those on the y-axis represent the signal-to-noise ratio between the true original $X$ and the $X^{\mathrm{opt}}$ recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but has the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than $2k(m+n-k)$. In Figure 2, which shows the results of the nuclear norm heuristic method in [39] (a convex relaxation), there is a sharp transition to perfect recovery near 1200 measurements, which is approximately equal to $2k(m+n-k)$. The comparison shows that our method has better performance: it can recover an image with fewer constraints, namely with fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method achieves 120 dB when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% compared with 1500. If the number of measurements were 1500, the accuracy of our method would be higher. Among the precisions of the four measurement matrices, the smallest is that of Random Projection; however, its value is still higher than 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500, we find that our curve rises smoothly compared with the curve on the same interval in Figure 2, which shows that our method is stable for reconstructing images.
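The recovery criterion used above can be sketched in a few lines. This is our own illustration; the matrices here are placeholders, not the MIT logo data.

```python
import numpy as np

def snr_db(X_true, X_rec):
    """SNR = -20 * log10(||X_true - X_rec||_F / ||X_true||_F)."""
    rel = np.linalg.norm(X_true - X_rec) / np.linalg.norm(X_true)
    return -20.0 * np.log10(rel)

def recovered(X_true, X_rec, tol=1e-3):
    """Recovery criterion of Section 3.1: relative error <= 1e-3,
    equivalently SNR >= 60 dB."""
    rel = np.linalg.norm(X_true - X_rec) / np.linalg.norm(X_true)
    return rel <= tol

X = np.ones((46, 81))        # placeholder with the MIT logo's dimensions
X_rec = X + 1e-5             # a hypothetical reconstruction (rel. err. 1e-5)
```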

Figure 3 shows the recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose to compare with this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely

Table 1: DC versus SeDuMi in time (seconds).

p      | 800   | 900    | 1000   | 1100   | 1200   | 1500
DC     | 46.82 | 50.49  | 50.78  | 51.38  | 51.96  | 54.09
SeDuMi | 93.26 | 120.16 | 138.24 | 159.44 | 161.76 | 245.38

sensitive to $p$ compared to SeDuMi. The reason is that closed-form solutions are obtained when we solve this problem. Comparisons are shown in Table 1.

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be formulated as
$$\min \ \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 \quad \text{s.t.} \ \mathrm{rank}(X) \le k, \ X \in R^{m \times n}, \quad (48)$$
where $X$ and $M$ are both $m \times n$ matrices and $\Omega$ is a subset of index pairs $(i, j)$. Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with $f(X) = \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2$.

In our experiments we first created random matrices $M_L \in R^{m \times k}$ and $M_R \in R^{m \times k}$ with i.i.d. Gaussian entries and then set $M = M_L M_R^T$. We sampled a subset $\Omega$ of $p$ entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC, FPCA [10], and the OptSpace (OPT) method [29], using
$$\frac{\|P_\Omega(X) - P_\Omega(M)\|_F}{\|P_\Omega(M)\|_F} \le 10^{-3} \quad (49)$$
as a stop criterion. The maximum number of iterations is no more than 1500 in all methods; other parameters are set to their defaults. We report results averaged over 10 runs.

We use $\mathrm{SR} = p/(mn)$ to denote the sampling ratio and $\mathrm{FR} = k(m+n-k)/p$ to represent the dimension of the set of rank-$k$ matrices divided by the number of measurements. NS and AT denote the number of matrices recovered successfully and the average time (in seconds) over the successful runs, respectively. RE represents the average relative error of the successfully recovered matrices. We use the relative error
$$\frac{\|M - X^*\|_F}{\|M\|_F} \quad (50)$$
to estimate the closeness of $X^*$ to $M$, where $X^*$ is the optimal solution to (48) produced by our method. We declared $M$ to be recovered if the relative error was less than $10^{-3}$.
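These two ratios can be checked directly; the snippet below (our own, matching the definitions just given) reproduces the FR values listed in Table 2 for $m = n = 40$ and $\mathrm{SR} = 0.5$.

```python
def sampling_ratio(p, m, n):
    """SR = p / (m*n): fraction of observed entries."""
    return p / (m * n)

def freedom_ratio(k, m, n, p):
    """FR = k*(m+n-k)/p: degrees of freedom of a rank-k m x n matrix
    divided by the number of measurements."""
    return k * (m + n - k) / p

m = n = 40
p = int(0.5 * m * n)               # SR = 0.5  ->  p = 800
fr1 = freedom_ratio(1, m, n, p)    # Table 2 lists 0.0988 for k = 1
fr6 = freedom_ratio(6, m, n, p)    # Table 2 lists 0.5550 for k = 6
```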

Table 2 shows that our method performs close to OptSpace in the NS column, and both outperform SVT, FPC, and FPCA. Moreover, SVT, FPC, and FPCA cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. The results do not mean that these algorithms are invalid, but rather that the maximum


Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0988 | 10 | 0.02 | 1.23E-04 | 6 | 0.5550 |  9 | 0.12 | 4.64E-04
SVT      |   |        | 10 | 0.11 | 1.59E-04 |   |        |  0 | —    | —
FPC      |   |        |  9 | 0.13 | 6.52E-04 |   |        |  0 | —    | —
FPCA     |   |        | 10 | 0.03 | 3.98E-05 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.01 | 1.31E-04 |   |        | 10 | 0.21 | 4.05E-04
DC       | 2 | 0.1950 | 10 | 0.03 | 1.50E-04 | 7 | 0.6388 |  9 | 0.17 | 6.11E-04
SVT      |   |        |  9 | 0.18 | 1.83E-04 |   |        |  0 | —    | —
FPC      |   |        |  8 | 0.14 | 6.52E-04 |   |        |  0 | —    | —
FPCA     |   |        | 10 | 0.02 | 5.83E-05 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.01 | 1.51E-04 |   |        |  9 | 0.37 | 5.37E-04
DC       | 3 | 0.2888 | 10 | 0.05 | 2.68E-04 | 8 | 0.7200 |  7 | 0.28 | 7.90E-04
SVT      |   |        |  2 | 0.36 | 2.47E-04 |   |        |  0 | —    | —
FPC      |   |        |  5 | 0.25 | 8.58E-04 |   |        |  0 | —    | —
FPCA     |   |        | 10 | 0.06 | 1.38E-04 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.04 | 1.75E-04 |   |        |  9 | 0.82 | 6.68E-04
DC       | 4 | 0.3800 | 10 | 0.06 | 2.84E-04 | 9 | 0.7987 |  4 | 0.40 | 8.93E-04
SVT      |   |        |  1 | 0.62 | 1.38E-04 |   |        |  0 | —    | —
FPC      |   |        |  1 | 0.25 | 8.72E-04 |   |        |  0 | —    | —
FPCA     |   |        |  0 | —    | —        |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.05 | 2.68E-04 |   |        |  4 | 1.48 | 9.28E-04
DC       | 5 | 0.4688 | 10 | 0.08 | 3.82E-04 |   |        |    |      |
SVT      |   |        |  0 | —    | —        |   |        |    |      |
FPC      |   |        |  0 | —    | —        |   |        |    |      |
FPCA     |   |        |  0 | —    | —        |   |        |    |      |
OptSpace |   |        | 10 | 0.09 | 3.16E-04 |   |        |    |      |

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0995 | 10 | 0.30 | 1.91E-04 | 5 | 0.4875 | 10 | 1.50 | 5.05E-04
SVT      |   |        |  7 | 1.82 | 2.04E-04 |   |        |  0 | —    | —
FPC      |   |        | 10 | 3.14 | 8.06E-04 |   |        |  0 | —    | —
FPCA     |   |        | 10 | 0.12 | 1.73E-04 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.03 | 1.71E-04 |   |        | 10 | 0.58 | 5.57E-04
DC       | 2 | 0.1980 | 10 | 0.77 | 3.34E-04 | 6 | 0.5820 |  6 | 2.82 | 7.45E-04
SVT      |   |        |  0 | —    | —        |   |        |  0 | —    | —
FPC      |   |        |  0 | —    | —        |   |        |  0 | —    | —
FPCA     |   |        |  8 | 0.21 | 2.97E-04 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.05 | 2.27E-04 |   |        |  9 | 0.97 | 6.57E-04
DC       | 3 | 0.2955 | 10 | 0.64 | 3.03E-04 | 7 | 0.6755 |  5 | 3.24 | 7.66E-04
SVT      |   |        |  0 | —    | —        |   |        |  0 | —    | —
FPC      |   |        |  0 | —    | —        |   |        |  0 | —    | —
FPCA     |   |        |  6 | 0.09 | 2.20E-04 |   |        |  0 | —    | —
OptSpace |   |        | 10 | 0.12 | 3.76E-04 |   |        |  5 | 1.89 | 7.95E-04
DC       | 4 | 0.3920 |  9 | 1.16 | 4.40E-04 |   |        |    |      |
SVT      |   |        |  0 | —    | —        |   |        |    |      |
FPC      |   |        |  0 | —    | —        |   |        |    |      |
FPCA     |   |        |  0 | —    | —        |   |        |    |      |
OptSpace |   |        | 10 | 0.22 | 3.85E-04 |   |        |    |      |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original; results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg      | k | FR     | NS | AT   | RE       | k  | FR     | NS | AT   | RE
DC       | 1 | 0.0663 | 10 | 0.19 | 1.49E-04 | 7  | 0.4530 | 10 | 0.61 | 3.64E-04
SVT      |   |        | 10 | 0.68 | 1.66E-04 |    |        |  0 | —    | —
FPC      |   |        | 10 | 1.33 | 4.47E-04 |    |        |  0 | —    | —
FPCA     |   |        | 10 | 0.17 | 6.12E-05 |    |        |  0 | —    | —
OptSpace |   |        | 10 | 0.05 | 1.29E-04 |    |        | 10 | 0.47 | 3.39E-04
DC       | 2 | 0.1320 | 10 | 0.20 | 1.69E-04 | 8  | 0.5120 | 10 | 0.87 | 4.70E-04
SVT      |   |        |  8 | 1.39 | 1.66E-04 |    |        |  0 | —    | —
FPC      |   |        | 10 | 2.18 | 4.84E-04 |    |        |  0 | —    | —
FPCA     |   |        | 10 | 0.16 | 7.68E-05 |    |        |  0 | —    | —
OptSpace |   |        | 10 | 0.04 | 1.48E-04 |    |        | 10 | 0.63 | 3.67E-04
DC       | 3 | 0.1970 | 10 | 0.25 | 1.92E-04 | 9  | 0.5730 |  9 | 1.03 | 4.85E-04
SVT      |   |        |  4 | 1.77 | 1.23E-04 |    |        |  0 | —    | —
FPC      |   |        |  9 | 4.32 | 5.82E-04 |    |        |  0 | —    | —
FPCA     |   |        |  9 | 0.19 | 1.21E-04 |    |        |  0 | —    | —
OptSpace |   |        | 10 | 0.07 | 1.98E-04 |    |        | 10 | 1.40 | 4.87E-04
DC       | 4 | 0.2613 | 10 | 0.32 | 2.30E-04 | 10 | 0.6333 |  8 | 1.36 | 6.21E-04
SVT      |   |        |  4 | 9.82 | 4.67E-04 |    |        |  0 | —    | —
FPC      |   |        |  7 | 5.02 | 6.59E-04 |    |        |  0 | —    | —
FPCA     |   |        | 10 | 0.18 | 6.63E-04 |    |        |  0 | —    | —
OptSpace |   |        | 10 | 0.09 | 2.08E-04 |    |        | 10 | 1.80 | 5.97E-04
DC       | 5 | 0.3250 | 10 | 0.37 | 2.72E-04 | 11 | 0.6930 |  9 | 2.19 | 7.29E-04
SVT      |   |        |  0 | —    | —        |    |        |  0 | —    | —
FPC      |   |        |  1 | 8.18 | 7.34E-04 |    |        |  0 | —    | —
FPCA     |   |        | 10 | 0.10 | 2.20E-04 |    |        |  9 | —    | —
OptSpace |   |        | 10 | 0.19 | 2.67E-04 |    |        |  5 | 4.11 | 7.05E-04
DC       | 6 | 0.3880 | 10 | 0.47 | 3.00E-04 | 12 | 0.7520 |  6 | 2.83 | 8.79E-04
SVT      |   |        |  0 | —    | —        |    |        |  0 | —    | —
FPC      |   |        |  0 | —    | —        |    |        |  0 | —    | —
FPCA     |   |        |  0 | 0.09 | 2.20E-04 |    |        |  0 | —    | —
OptSpace |   |        | 10 | 0.25 | 3.25E-04 |    |        |  6 | 6.44 | 8.47E-04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method shows better NS performance and relative errors similar to those of OptSpace. In terms of time, our method is slower than OptSpace but faster than FPC and SVT. Tables 3 and 4 show the above results. In conclusion, our method has better performance on "hard" problems, that is, those with FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen by the default of each compared method. We observe that our method can recover problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, results by our method, and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n) | k  | p      | SR   | FR   | Alg      | NS | AT     | RE
100  | 10 | 5666   | 0.57 | 0.34 | DC       | 10 | 0.14   | 1.91E-04
     |    |        |      |      | SVT      | 10 | 2.81   | 1.85E-04
     |    |        |      |      | FPC      | 10 | 5.23   | 1.83E-04
     |    |        |      |      | FPCA     | 10 | 0.28   | 4.94E-05
     |    |        |      |      | OptSpace | 10 | 0.90   | 1.41E-04
200  | 10 | 15665  | 0.39 | 0.25 | DC       | 10 | 0.80   | 1.65E-04
     |    |        |      |      | SVT      | 10 | 2.93   | 2.53E-04
     |    |        |      |      | FPC      | 10 | 4.18   | 2.73E-04
     |    |        |      |      | FPCA     | 10 | 0.08   | 6.88E-05
     |    |        |      |      | OptSpace | 10 | 0.41   | 1.72E-04
300  | 10 | 27160  | 0.30 | 0.22 | DC       | 10 | 2.72   | 1.67E-04
     |    |        |      |      | SVT      | 10 | 2.57   | 1.79E-04
     |    |        |      |      | FPC      | 10 | 9.80   | 1.63E-04
     |    |        |      |      | FPCA     | 10 | 0.39   | 6.71E-05
     |    |        |      |      | OptSpace | 10 | 1.85   | 1.50E-04
400  | 10 | 41269  | 0.26 | 0.19 | DC       | 10 | 5.73   | 1.56E-04
     |    |        |      |      | SVT      | 10 | 2.78   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 13.28  | 1.30E-04
     |    |        |      |      | FPCA     | 10 | 0.73   | 6.65E-05
     |    |        |      |      | OptSpace | 10 | 3.47   | 1.46E-04
500  | 10 | 49471  | 0.20 | 0.20 | DC       | 10 | 13.63  | 1.73E-04
     |    |        |      |      | SVT      | 10 | 2.94   | 1.74E-04
     |    |        |      |      | FPC      | 10 | 25.75  | 1.44E-04
     |    |        |      |      | FPCA     | 10 | 1.12   | 1.07E-04
     |    |        |      |      | OptSpace | 10 | 5.05   | 1.60E-04
1000 | 10 | 119406 | 0.12 | 0.17 | DC       | 10 | 356.41 | 1.70E-04
     |    |        |      |      | SVT      | 10 | 4.56   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 110.53 | 1.29E-04
     |    |        |      |      | FPCA     | 10 | 5.16   | 1.64E-04
     |    |        |      |      | OptSpace | 10 | 14.61  | 1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods specifically developed for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:
$$\min \ \mathrm{Tr}(LX) \quad \text{s.t.} \ \mathrm{diag}(X) = e, \ \mathrm{rank}(X) = 1, \ X \succeq 0, \quad (51)$$
where $X = xx^T$, $x = (x_1, \ldots, x_n)^T$, $L = \frac{1}{4}(\mathrm{diag}(We) - W)$, $e = (1, 1, \ldots, 1)^T$, and $W$ is the weight matrix. Equation (51) is a special case of problem (3) with $\Gamma \cap \Omega = \{X \mid \mathrm{diag}(X) = e, X \in S_+^n\}$. Hence our method can be suitably applied to (51).
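The data of (51) is easy to set up explicitly. The following sketch (our own toy illustration, not the paper's solver) builds $L$ for a 4-node cycle and verifies that for a rank-one feasible point $X = xx^T$ with $x \in \{-1, 1\}^n$, $\mathrm{Tr}(LX) = x^T L x = \frac{1}{4}\sum_{i,j} w_{ij}(1 - x_i x_j)$ equals the weight of the cut induced by the signs of $x$.

```python
import numpy as np

def laplacian_quarter(W):
    """L = (1/4) * (diag(W e) - W), with e the all-ones vector, as in (51)."""
    e = np.ones(W.shape[0])
    return 0.25 * (np.diag(W @ e) - W)

def cut_weight(W, x):
    """Total weight of edges crossing the cut defined by the signs of x."""
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# Toy 4-cycle with unit weights: the maximum cut alternates the signs.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = laplacian_quarter(W)
x = np.array([1, -1, 1, -1])
X = np.outer(x, x)                 # rank-1 feasible point of (51)
val = float(np.trace(L @ X))       # equals x^T L x = cut weight
```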

We now address the initialization and termination criteria for our method when applied to (51). We set $X^0 = ee^T$; in addition, we choose the initial penalty parameter $\rho$ to be 0.01 and set the parameter $\tau$ to 1.1. We use $\|X^{i+1} - X^i\|_F \le \varepsilon$ as the termination criterion with $\varepsilon = 10^{-5}$.

In our numerical experiments we first test several small-scale max-cut problems; then we run our method on the standard "G" set problems of Professor Rinaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the dimension of the graph and the rank of the solution, respectively, and "obv" denotes the objective values of the optimal matrix solutions obtained by our method.

In Table 6, "KV" denotes known optimal objective values and "UB" presents upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for solving these problems, and it has computed the global optimal solutions of all of them. Moreover, we obtain solutions of lower rank compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds given by the spectral bundle method proposed by Helmberg and Rendl [41]; "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]; "DC" denotes the results generated by our method; "DC+" gives the results of DC combined with a neighborhood approximation algorithm; and the rightmost column, "BK", gives the best known results according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research and better than the values given by GW-cut, and they are all less than the corresponding upper bounds. For all problems we obtain solutions of lower rank than GW-cut; moreover, most solutions satisfy the constraint rank(X) = 1. These results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper we used a closed-form penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph | m  | KV  | UB (GW-cut) | k (GW-cut) | obv (DC) | k (DC) | Optimal solutions
C5    | 5  | 4   | 4.525   | 5  | 4   | 1 | (-1, 1, -1, 1, -1)
K5    | 5  | 6   | 6.25    | 5  | 6   | 1 | (-1, 1, -1, 1, -1)
AG    | 5  | 928 | 960.40  | 5  | 928 | 1 | (-1, 1, 1, -1, -1)
BG    | 12 | 88  | 90.3919 | 12 | 88  | 1 | (-1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph | m    | SUB   | GW-cut obv | GW-cut k | DC obv | DC k | DC+   | BK
G01   | 800  | 12078 | 10612 | 14 | 11281 | 1 | 11518 | 11624
G02   | 800  | 12084 | 10617 | 31 | 11261 | 1 | 11550 | 11620
G03   | 800  | 12077 | 10610 | 14 | 11275 | 1 | 11540 | 11622
G11   | 800  | 627   | 551   | 30 | 494   | 2 | 528   | 564
G14   | 800  | 3187  | 2800  | 13 | 2806  | 1 | 2980  | 3063
G15   | 800  | 3169  | 2785  | 70 | 2767  | 1 | 2969  | 3050
G16   | 800  | 3172  | 2782  | 14 | 2800  | 1 | 2974  | 3052
G43   | 1000 | 7027  | 6174  | —  | 6321  | 1 | 6519  | 6660
G45   | 1000 | 7020  | 6168  | —  | 6348  | 1 | 6564  | 6654
G52   | 1000 | 4009  | 3520  | —  | 3383  | 1 | 3713  | 3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a broader class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Houssain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints, Ph.D. thesis, National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications, Ph.D. thesis, Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 251–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/.

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.


Mathematical Problems in Engineering

[6], and finance [7, 8], to name but a few. There are some typical examples of constrained rank minimization problems. One is the affine matrix rank minimization problem [5, 9, 10]:

$$\min\; \frac{1}{2}\|\mathcal{A}(X) - b\|_F^2 \quad \text{s.t. } \operatorname{rank}(X) \le k \qquad (1)$$

where $X \in R^{m\times n}$ is the decision variable, and the linear map $\mathcal{A}: R^{m\times n} \to R^p$ and vector $b \in R^p$ are known. Problem (1) includes matrix completion based on affine rank minimization and compressed sensing. Matrix completion is widely used in collaborative filtering [11, 12], system identification [13, 14], sensor networks [15, 16], image processing [17, 18], sparse channel estimation [19, 20], spectrum sensing [21, 22], and multimedia coding [23, 24]. Another is the max-cut problem [25] in combinatorial optimization:

$$\min\; \operatorname{Tr}(LX) \quad \text{s.t. } \operatorname{diag}(X) = e,\; \operatorname{rank}(X) = 1,\; X \succeq 0 \qquad (2)$$

where $X = xx^T$, $x = (x_1,\dots,x_n)^T$, $L = \frac{1}{4}(\operatorname{diag}(We) - W)$, $e = (1,1,\dots,1)^T$, and $W$ is the weight matrix of the graph. As a
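As a quick illustration of this formulation (an illustrative Python/NumPy sketch of my own, not code from the paper), the following builds $L$ for a small weighted graph and checks that $\operatorname{Tr}(LX)$ with the feasible rank-one $X = xx^T$ equals the weight of the cut induced by $x \in \{\pm 1\}^n$:

```python
import numpy as np

# Small undirected weighted graph on 4 nodes; W is its symmetric weight matrix.
W = np.array([[0, 1, 2, 0],
              [1, 0, 0, 3],
              [2, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)
n = W.shape[0]
e = np.ones(n)
L = 0.25 * (np.diag(W @ e) - W)        # L = (1/4)(diag(We) - W)

x = np.array([1.0, -1, -1, 1])         # cut: nodes {0, 3} versus {1, 2}
X = np.outer(x, x)                     # X = x x^T: rank 1, diag(X) = e, X >= 0

# Weight of the cut induced by x, summed over ordered pairs.
cut = 0.25 * sum(W[i, j] * (1 - x[i] * x[j]) for i in range(n) for j in range(n))
assert np.isclose(np.trace(L @ X), cut)   # Tr(LX) equals the cut weight
```

Evaluated at a feasible rank-one $X$, the objective recovers the cut value; dropping the rank-one constraint is exactly what yields the semidefinite relaxation of [42].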

matter of fact, these problems can be written as a common convex constrained rank minimization model, that is,

$$\min\; f(X) \quad \text{s.t. } \operatorname{rank}(X) \le k,\; X \in \Gamma \cap \Omega \qquad (3)$$

where $f: R^{m\times n} \to R$ is a continuously differentiable convex function, $\operatorname{rank}(X)$ is the rank of a matrix $X$, $k \ge 0$ is a given integer, $\Gamma$ is a closed convex set, and $\Omega$ is a closed unitarily invariant convex set. According to [26], a set $C \subseteq R^{m\times n}$ is a unitarily invariant set if

$$\{UXV : U \in \mathcal{U}^m,\; V \in \mathcal{U}^n,\; X \in C\} = C \qquad (4)$$

where $\mathcal{U}^n$ denotes the set of all unitary matrices in $R^{n\times n}$. Various types of methods have been proposed to solve (3) or its special cases with different types of $\Gamma \cap \Omega$.
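A standard example of a unitarily invariant set is a Frobenius-norm ball, since $\|UXV\|_F = \|X\|_F$ for any orthogonal $U$, $V$. A short NumPy check (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 4
X = rng.standard_normal((m, n))

# Random orthogonal U in U^m and V in U^n via QR factorization.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

# The Frobenius ball {X : ||X||_F <= c} satisfies (4): UXV has the same norm.
assert np.isclose(np.linalg.norm(U @ X @ V, 'fro'), np.linalg.norm(X, 'fro'))
```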

When $\Gamma \cap \Omega$ is $R^{m\times n}$, (3) is the unconstrained rank minimization problem. One approach is to solve this case by convex relaxation of $\operatorname{rank}(\cdot)$: Cai et al. [9] proposed the singular value thresholding (SVT) algorithm, and Ma et al. [10] proposed the fixed point continuation (FPC) algorithm. Another approach is to handle the rank constraint directly: Jain et al. [27] proposed a simple and efficient algorithm, Singular Value Projection (SVP), based on the projected gradient method; Haldar and Hernando [28] proposed an alternating least squares approach; Keshavan et al. [29] proposed an algorithm based on optimization over a Grassmann manifold.

When $\Gamma \cap \Omega$ is a general convex set, (3) is a convex constrained rank minimization problem. One approach is to replace the rank constraint with other constraints, such as trace or norm constraints. Based on this, several heuristic methods have been proposed in the literature: Nesterov et al. [30] proposed interior-point polynomial algorithms; Weimin [31] proposed an adaptive seminuclear norm regularization approach; in addition, semidefinite programming (SDP) has also been applied, see [32]. Another approach is to handle the rank constraint directly: Grigoriadis and Beran [33] proposed alternating projection algorithms for linear matrix inequality problems; Lu et al. [26] proposed the penalty decomposition (PD) method based on penalty functions; Gao and Sun [34] recently proposed a majorized penalty approach; Burer et al. [35] proposed a nonlinear programming (NLP) reformulation approach.

In this paper, we consider problem (3). We introduce a new variable and penalize the equality constraint to the objective function. By this technique, we obtain a closed-form solution and use it to reformulate the objective function in problem (3) as a difference of convex functions; (3) is thereby reformulated as a DC program, which can be solved via a linear approximation method. A function is called DC if it can be represented as the difference of two convex functions, and mathematical programming problems dealing with DC functions are called DC programming problems. Our method differs from the original PD method [26] in two respects. First, we penalize only the equality constraint to the objective function and keep the other constraints, while PD penalizes everything except the rank constraint, including equality and inequality constraints. Second, each subproblem of PD is approximately solved by a block coordinate descent method, while we approximate problem (3) by a convex optimization problem to be solved. Compared with the PD method, our method uses the closed-form solution to remove the rank constraint, while PD must handle the rank constraint in each iteration; it is known that the rank constraint is nonconvex and discontinuous and therefore hard to deal with. Moreover, since only convex constraints remain, the reformulated program can be solved by solving a series of convex programs: we replace the last convex function by its linearization, turning the program into a convex one. We test the performance of our method by applying it to affine rank minimization problems and max-cut problems. Numerical results show that the algorithm is feasible and effective.

The rest of this paper is organized as follows. In Section 1.1 we introduce the notation used throughout the paper. Our method is presented in Section 2. In Section 3 we apply it to some practical problems to test its performance.

1.1. Notations. In this paper, the symbol $R^n$ denotes the $n$-dimensional Euclidean space, and the set of all $m\times n$ matrices with real entries is denoted by $R^{m\times n}$. Given matrices $X$ and $Y$ in $R^{m\times n}$, the standard inner product is defined by $\langle X, Y\rangle := \operatorname{Tr}(XY^T)$, where $\operatorname{Tr}(\cdot)$ denotes the trace of a matrix. The Frobenius norm of a real matrix $X$ is defined as $\|X\|_F := \sqrt{\operatorname{Tr}(XX^T)}$, and $\|X\|_*$ denotes the nuclear norm of $X$, that is, the sum of the singular values of $X$. The rank of a matrix $X$ is denoted by $\operatorname{rank}(X)$. We use $I_q$ to denote the identity matrix of dimension $q$. Throughout this paper, we always assume that the singular values are arranged in nonincreasing order, that is, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0 = \sigma_{r+1} = \cdots = \sigma_{\min\{m,n\}}$. $\partial f$ denotes the subdifferential of the function $f$, and $\mathcal{A}^*$ denotes the adjoint of $\mathcal{A}$. $\|\cdot\|_{(k)}$ is used to denote the Ky Fan $k$-norm. Let $P_\Omega$ be the projection onto the closed convex set $\Omega$.

2. An Efficient Algorithm Based on DC Programming

In this section, we first reformulate problem (3) as a DC programming problem over a convex constraint set, which can be solved via a stepwise linear approximation method. Then the convergence analysis is provided.

2.1. Model Transformation. Let $X = Y$. Problem (3) is rewritten as

$$\min\; f(X) \quad \text{s.t. } X = Y,\; \operatorname{rank}(Y) \le k,\; X \in \Gamma \cap \Omega \qquad (5)$$

Adopting the penalty function method to move the constraint $X = Y$ into the objective function and choosing a proper penalty parameter $\rho$, (5) can be reformulated as

$$\min\; f(X) + \frac{\rho}{2}\|X - Y\|_F^2 \quad \text{s.t. } \operatorname{rank}(Y) \le k,\; X \in \Gamma \cap \Omega \qquad (6)$$

Minimizing over $Y$ with $X \in \Gamma \cap \Omega$ fixed, (6) can be treated as

$$\min_X\; f(X) + \min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\|X - Y\|_F^2 \quad \text{s.t. } X \in \Gamma \cap \Omega \qquad (7)$$

In fact, given the singular value decomposition $X = USV^T$ with singular values $\sigma_j(X)$, $j = 1,\dots,\min\{m,n\}$, where $U \in R^{m\times m}$ and $V \in R^{n\times n}$ are orthogonal matrices, the inner minimization problem with respect to $Y$ with $X$ fixed in (7), that is, $\min_{\operatorname{rank}(Y)\le k}(\rho/2)\|X - Y\|_F^2$, has the closed-form solution $Y = UCV^T$ with

$$C = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \qquad (8)$$

where $D = \operatorname{diag}(\sigma_1,\sigma_2,\dots,\sigma_k) \in R^{k\times k}$. Thus, by the closed-form solution and the unitary invariance of the Frobenius norm, we have

$$\min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\|X - Y\|_F^2 = \min_{\operatorname{rank}(Y)\le k}\frac{\rho}{2}\left\|U^TXV - U^TYV\right\|_F^2 = \frac{\rho}{2}\|S - C\|_F^2 = \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\sum_{i=1}^k \sigma_i^2(X) \qquad (9)$$

Substituting the above into problem (7), problem (3) is reformulated as

$$\min\; f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\sum_{i=1}^k \sigma_i^2(X) \quad \text{s.t. } X \in \Gamma \cap \Omega \qquad (10)$$
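The cancellation in (9) is the Eckart-Young theorem in disguise: the best rank-$k$ approximation error equals the energy of the discarded singular values. A small numerical check (an illustrative Python/NumPy sketch of my own, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 6, 5, 2
X = rng.standard_normal((m, n))

# Best rank-k approximation Y = U C V^T: keep the k largest singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

lhs = np.linalg.norm(X - Y, 'fro') ** 2            # min over rank(Y) <= k
rhs = np.linalg.norm(X, 'fro') ** 2 - np.sum(s[:k] ** 2)
assert np.isclose(lhs, rhs)                        # matches the last step of (9)
```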

Clearly, $f(X) + (\rho/2)\|X\|_F^2$ is a convex function of $X$. Define the function $g: R^{m\times n} \to R$, $g(X) = \sum_{i=1}^k \sigma_i^2(X)$, $1 \le k \le \min\{m,n\}$. Next we study the properties of $g(X)$. Problem (3) would be reformulated as a DC programming problem with convex constraints if $g(X)$ were a convex function, in which case it can be solved via a stepwise linear approximation method. In fact, this does hold, and $g(X)$ equals a trace function.

Theorem 1. Let

$$W = \begin{pmatrix} I_k & O \\ O & O \end{pmatrix} \in R^{m\times m} \qquad (11)$$

where $O$ denotes a zero matrix whose dimension is determined by the context. Then for any $X = USV^T \in R^{m\times n}$ the following hold:

(1) $g(X)$ is a convex function.

(2) $g(X) = \operatorname{Tr}(X^TUWU^TX)$.

Proof. (1) Since $\sigma_j^2(X)$, $j = 1,\dots,\min\{m,n\}$, are the eigenvalues of $XX^T$, we have $g(X) = \sum_{i=1}^k \sigma_i^2(X) = \|XX^T\|_{(k)}$, where $\|\cdot\|_{(k)}$ is the Ky Fan $k$-norm (defined in [36]). Clearly $\|\cdot\|_{(k)}$ is a convex function. Let $\varphi(\cdot) = \|\cdot\|_{(k)}$, $1 \le k \le \min\{m,n\}$. For all $A, B \in R^{m\times n}$ and $\alpha \in [0,1]$, we have

$$\begin{aligned}
g(\alpha A + (1-\alpha)B) &= \varphi\bigl((\alpha A + (1-\alpha)B)(\alpha A + (1-\alpha)B)^T\bigr)\\
&= \varphi\bigl(\alpha^2 AA^T + \alpha(1-\alpha)AB^T + \alpha(1-\alpha)BA^T + (1-\alpha)^2 BB^T\bigr)\\
&\le \alpha^2\varphi(AA^T) + \alpha(1-\alpha)\varphi(AB^T) + \alpha(1-\alpha)\varphi(BA^T) + (1-\alpha)^2\varphi(BB^T)
\end{aligned} \qquad (12)$$


Note that $BA^T = (AB^T)^T$; thus they have the same nonzero singular values, and then $\varphi(BA^T) = \varphi(AB^T)$ holds. Together with (12) we have

$$\varphi\bigl((\alpha A + (1-\alpha)B)(\alpha A + (1-\alpha)B)^T\bigr) \le \alpha^2\varphi(AA^T) + 2\alpha(1-\alpha)\varphi(AB^T) + (1-\alpha)^2\varphi(BB^T) \qquad (13)$$

It is known that, for arbitrary matrices $A, B \in R^{m\times m}$,

$$\sigma_j(A^*B) \le \frac{1}{2}\sigma_j(A^*A + B^*B), \quad 1 \le j \le m \qquad (14)$$

(see Theorem IX.4.2 in [36]). Further,

$$\sum_{j=1}^k \sigma_j(A^TB) \le \sum_{j=1}^k \frac{1}{2}\sigma_j(A^*A + B^*B), \quad 1 \le k \le n \qquad (15)$$

That is,

$$\varphi(AB^T) \le \frac{1}{2}\varphi(AA^T + BB^T) \qquad (16)$$

where $AA^T$ and $BB^T$ are symmetric matrices. It is also known that, for arbitrary Hermitian matrices $AA^T, BB^T \in R^{m\times m}$ and $k = 1,2,\dots,m$, the following holds (see Exercise II.1.15 in [36]):

$$\sum_{j=1}^k \sigma_j(AA^T + BB^T) \le \sum_{j=1}^k \sigma_j(AA^T) + \sum_{j=1}^k \sigma_j(BB^T) \qquad (17)$$

Thus

$$\varphi(A^TA + B^TB) = \varphi(AA^T + BB^T) \le \varphi(AA^T) + \varphi(BB^T) \qquad (18)$$

Combining (13), (16), and (18) yields

$$g(\alpha A + (1-\alpha)B) \le \alpha\varphi(AA^T) + (1-\alpha)\varphi(BB^T) = \alpha g(A) + (1-\alpha)g(B) \qquad (19)$$

Hence $g(X) = \sum_{i=1}^k \sigma_i^2(X)$ is a convex function.

(2) Let $\Theta = \operatorname{diag}(\sigma_1,\sigma_2,\dots,\sigma_r)$, where $r$ is the rank of $X$ and $k \le r$, and

$$S = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix} \qquad (20)$$

Then we have

$$\operatorname{Tr}(X^TUWU^TX) = \operatorname{Tr}(VS^TU^TUWU^TUSV^T) = \operatorname{Tr}(VS^TWSV^T) = \operatorname{Tr}(S^TWS) \qquad (21)$$

Moreover,

$$S^TWS = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}^T \begin{pmatrix} I_k & O \\ O & O \end{pmatrix} \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix} = \begin{pmatrix} \tilde\Theta & O \\ O & O \end{pmatrix} \qquad (22)$$

where $\tilde\Theta = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_k^2)$. Thus $\operatorname{Tr}(S^TWS) = \sum_{i=1}^k \sigma_i^2(X) = g(X)$ holds.
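Theorem 1(2) is easy to confirm numerically. The following sketch (illustrative Python, with $W$ built as in (11)) checks $g(X) = \operatorname{Tr}(X^TUWU^TX)$ on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5, 4, 2
X = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(X)            # full SVD, U is m x m
W = np.zeros((m, m))
W[:k, :k] = np.eye(k)                  # W = [[I_k, O], [O, O]] as in (11)

g_val = np.sum(s[:k] ** 2)             # g(X): sum of k largest squared singular values
tr_val = np.trace(X.T @ U @ W @ U.T @ X)
assert np.isclose(g_val, tr_val)
```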

According to the above theorem, (10) can be rewritten as

$$\min\; f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^TUWU^TX) \quad \text{s.t. } X \in \Gamma \cap \Omega \qquad (23)$$

Since the objective function of (23) is a difference of convex functions and the constraint set $\Gamma \cap \Omega$ is closed and convex, (23) is a DC program. Next we use a stepwise linear approximation method to solve it.

2.2. Stepwise Linear Approximative Algorithm. In this subsection, we use the stepwise linear approximation method [37] to solve the reformulated model (23). Let $F(X)$ denote the objective function of model (23), that is,

$$F(X) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^TUWU^TX) \qquad (24)$$

and the linear approximation function of $F(X)$ can be defined as

$$H(X, X^i) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\left(g(X^i) + \langle\partial g(X^i), X - X^i\rangle\right) \qquad (25)$$

where $\partial g(X)/\partial X = \partial\operatorname{Tr}(X^TUWU^TX)/\partial X = 2UWU^TX$ (see 5.2.2 in [36]).

The stepwise linear approximative algorithm for solving model (23) can be described as follows.

Given an initial matrix $X^0 \in \Gamma \cap \Omega$, a penalty parameter $\rho > 0$, a rank $k > 0$, and $\tau > 1$, set $i = 0$.

Step 1. Compute the singular value decomposition of $X^i$: $U^i\Sigma^i(V^i)^T = X^i$.

Step 2. Update

$$X^{i+1} \in \arg\min_{X \in \Gamma \cap \Omega} H(X, X^i) \qquad (26)$$

Step 3. If the stopping criterion is satisfied, stop; else set $i \leftarrow i + 1$ and $\rho \leftarrow (1+\tau)\rho$, and go to Step 1.

Remarks.

(1) Subproblem (26) can be handled by solving a series of convex subproblems; efficient methods for convex programming are plentiful and come with solid theoretical guarantees.

(a) When $\Gamma \cap \Omega$ is $R^{m\times n}$, closed-form solutions can be obtained. For example, when solving model (1), $X^{i+1} = (\mathcal{A}^*\mathcal{A} + \rho I)^{-1}(\mathcal{A}^*b + \rho U^iW(U^i)^TX^i)$ is obtained from the first-order optimality condition

$$0 \in \partial f(X) + \rho X - \rho U^iW(U^i)^TX^i \qquad (27)$$

(b) When $\Gamma \cap \Omega$ is a general convex set, a closed-form solution is hard to get, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). Depending on the particular closed convex constraint set, a suitable convex programming algorithm can be combined with our method; the toolboxes listed in this paper are just one possible choice.

(2) The stopping rule adopted here is $\|X^{i+1} - X^i\|_F \le \varepsilon$ for some sufficiently small $\varepsilon$.
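The whole scheme for the unconstrained affine case can be sketched as follows (an illustrative Python/NumPy translation under my own conventions, treating $\mathcal{A}$ as a $p \times mn$ matrix acting on $\operatorname{vec}(X)$; the authors' experiments use MATLAB, and the function name and parameters here are mine):

```python
import numpy as np

def sla_affine(A, b, m, n, k, rho=1e-3, tau=1.0, eps=1e-8, max_iter=60):
    """Stepwise linear approximation for min 0.5*||A vec(X) - b||^2
    s.t. rank(X) <= k, in the unconstrained case Gamma cap Omega = R^{m x n}.

    A is a p-by-(m*n) matrix acting on vec(X); each subproblem (26) then has
    the closed form given by the first-order condition (27):
        (A^T A + rho I) vec(X_{i+1}) = A^T b + rho vec(U_i W U_i^T X_i).
    """
    X = np.zeros((m, n))
    G = A.T @ A
    Atb = A.T @ b
    for _ in range(max_iter):
        U = np.linalg.svd(X)[0]            # Step 1: SVD of X_i
        Uk = U[:, :k]
        M = Uk @ (Uk.T @ X)                # U_i W U_i^T X_i (top-k projection)
        x_new = np.linalg.solve(G + rho * np.eye(m * n),
                                Atb + rho * M.ravel())   # Step 2
        X_new = x_new.reshape(m, n)
        if np.linalg.norm(X_new - X, 'fro') <= eps:      # stopping rule (2)
            return X_new
        X = X_new
        rho *= 1 + tau                     # Step 3: inflate the penalty
    return X

# Demo with identity measurements: b = vec(X_true) for a rank-k X_true,
# so the minimizer of (47) is X_true itself.
rng = np.random.default_rng(2)
m, n, k = 6, 5, 2
X_true = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A = np.eye(m * n)
b = A @ X_true.ravel()
X_hat = sla_affine(A, b, m, n, k)
assert np.linalg.norm(X_hat - X_true, 'fro') <= 1e-2 * np.linalg.norm(X_true, 'fro')
```

Starting from a small $\rho$ lets the first subproblem track the data term closely; the geometric growth of $\rho$ then pushes the iterates toward the rank-$k$ set, matching the role of Step 3.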

2.3. Convergence Analysis. In this subsection, we analyze the convergence properties of the sequence $\{X^i\}$ generated by the above algorithm. Before we prove the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

Lemma 2. For any $Y_1$ and $Y_2 \in R^{m\times n}$,

$$\left\|\tilde Y_1 - \tilde Y_2\right\|_F \le \left\|Y_1 - Y_2\right\|_F \qquad (28)$$

where $\tilde Y_i$ is the approximation of $Y_i$ obtained by keeping only its first $k$ singular values ($i = 1, 2$).

Proof. Without loss of generality, we assume $m \le n$ and the singular value decompositions $Y_1 = U_1S_1V_1^T$, $Y_2 = U_2S_2V_2^T$ with

$$S_1 = \begin{pmatrix} \operatorname{diag}(\sigma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n}, \qquad S_2 = \begin{pmatrix} \operatorname{diag}(\gamma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n} \qquad (29)$$

where $\sigma = (\sigma_1,\sigma_2,\dots,\sigma_s)$, $\sigma_1 \ge \cdots \ge \sigma_s > 0$, and $\gamma = (\gamma_1,\gamma_2,\dots,\gamma_t)$, $\gamma_1 \ge \cdots \ge \gamma_t > 0$. Then $\tilde Y_i$ can be written as $\tilde Y_i = U_iWS_iV_i^T$ ($i = 1, 2$). Noting that $U_1$, $V_1$, $U_2$, $V_2$ are orthogonal matrices, we have

$$\begin{aligned}
\|Y_1 - Y_2\|_F^2 - \|\tilde Y_1 - \tilde Y_2\|_F^2 &= \operatorname{Tr}\bigl((Y_1 - Y_2)^T(Y_1 - Y_2)\bigr) - \operatorname{Tr}\bigl((\tilde Y_1 - \tilde Y_2)^T(\tilde Y_1 - \tilde Y_2)\bigr)\\
&= \operatorname{Tr}\bigl(Y_1^TY_1 - \tilde Y_1^T\tilde Y_1 + Y_2^TY_2 - \tilde Y_2^T\tilde Y_2\bigr) - 2\operatorname{Tr}\bigl(Y_1^TY_2 - \tilde Y_1^T\tilde Y_2\bigr)\\
&= \sum_{i=1}^s \sigma_i^2 - \sum_{i=1}^k \sigma_i^2 + \sum_{i=1}^t \gamma_i^2 - \sum_{i=1}^k \gamma_i^2 - 2\operatorname{Tr}\bigl(Y_1^TY_2 - \tilde Y_1^T\tilde Y_2\bigr)
\end{aligned} \qquad (30)$$
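The decomposition in (30) is an exact algebraic identity (it uses only $\|M\|_F^2 = \operatorname{Tr}(M^TM)$ and the singular value expansions of the squared norms), which can be confirmed numerically (an illustrative NumPy sketch; the helper name is mine):

```python
import numpy as np

def trunc(Y, k):
    """Keep the k largest singular values of Y (the tilde-Y of Lemma 2)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :], s

rng = np.random.default_rng(5)
m, n, k = 6, 4, 2
Y1, Y2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
T1, s1 = trunc(Y1, k)
T2, s2 = trunc(Y2, k)

lhs = np.linalg.norm(Y1 - Y2, 'fro') ** 2 - np.linalg.norm(T1 - T2, 'fro') ** 2
rhs = (np.sum(s1 ** 2) - np.sum(s1[:k] ** 2)
       + np.sum(s2 ** 2) - np.sum(s2[:k] ** 2)
       - 2 * np.trace(Y1.T @ Y2 - T1.T @ T2))
assert np.isclose(lhs, rhs)   # both sides of identity (30) agree
```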

And we note that

$$\begin{aligned}
\operatorname{Tr}\bigl(Y_1^TY_2 - \tilde Y_1^T\tilde Y_2\bigr) &= \operatorname{Tr}\bigl((Y_1 - \tilde Y_1)^T(Y_2 - \tilde Y_2) + (Y_1 - \tilde Y_1)^T\tilde Y_2 + \tilde Y_1^T(Y_2 - \tilde Y_2)\bigr)\\
&= \operatorname{Tr}\bigl(V_1(S_1 - WS_1)^TU_1^TU_2(S_2 - WS_2)V_2^T + V_1(S_1 - WS_1)^TU_1^TU_2WS_2V_2^T\\
&\qquad + V_1(WS_1)^TU_1^TU_2(S_2 - WS_2)V_2^T\bigr)\\
&= \operatorname{Tr}\bigl((S_1 - WS_1)^T\bar U(S_2 - WS_2)\bar V^T + (S_1 - WS_1)^T\bar UWS_2\bar V^T\\
&\qquad + (WS_1)^T\bar U(S_2 - WS_2)\bar V^T\bigr)
\end{aligned} \qquad (31)$$

where $\bar U = U_1^TU_2$ and $\bar V = V_1^TV_2$ are also orthogonal matrices.

Next we estimate an upper bound for $\operatorname{Tr}(Y_1^TY_2 - \tilde Y_1^T\tilde Y_2)$. According to Theorem 7.4.9 in [38], an orthogonal matrix $U$ is a maximizing matrix for the problem $\max\{\operatorname{Tr}(AU) : U \text{ orthogonal}\}$ if and only if $AU$ is a positive semidefinite matrix. Thus $\operatorname{Tr}((S_1 - WS_1)^T\bar U(S_2 - WS_2)\bar V^T)$, $\operatorname{Tr}((S_1 - WS_1)^T\bar UWS_2\bar V^T)$, and $\operatorname{Tr}((WS_1)^T\bar U(S_2 - WS_2)\bar V^T)$ achieve their maxima if and only if $(S_1 - WS_1)^T\bar U(S_2 - WS_2)\bar V^T$, $(S_1 - WS_1)^T\bar UWS_2\bar V^T$, and $(WS_1)^T\bar U(S_2 - WS_2)\bar V^T$ are all positive semidefinite. It is known that when $AB$ is positive semidefinite,

$$\operatorname{Tr}(AB) = \sum_i \sigma_i(AB) \le \sum_i \sigma_i(A)\,\sigma_i(B) \qquad (32)$$

Applying (32) to the above three terms, we have

$$\begin{aligned}
\operatorname{Tr}\bigl((S_1 - WS_1)^T\bar U(S_2 - WS_2)\bar V^T\bigr) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(S_2 - WS_2)\\
\operatorname{Tr}\bigl((S_1 - WS_1)^T\bar UWS_2\bar V^T\bigr) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(WS_2)\\
\operatorname{Tr}\bigl((WS_1)^T\bar U(S_2 - WS_2)\bar V^T\bigr) &\le \sum_i \sigma_i(WS_1)\,\sigma_i(S_2 - WS_2)
\end{aligned} \qquad (33)$$


Hence, without loss of generality assuming $k \le s \le t$, we have

$$\begin{aligned}
\|Y_1 - Y_2\|_F^2 - \|\tilde Y_1 - \tilde Y_2\|_F^2 &\ge \sum_{i=1}^s \sigma_i^2 - \sum_{i=1}^k \sigma_i^2 + \sum_{i=1}^t \gamma_i^2 - \sum_{i=1}^k \gamma_i^2 - 2\sum_{i=k+1}^s \sigma_i\gamma_i\\
&= \sum_{i=k+1}^s (\sigma_i - \gamma_i)^2 + \sum_{i=s+1}^t \gamma_i^2 \ge 0
\end{aligned} \qquad (34)$$

which implies that $\|\tilde Y_1 - \tilde Y_2\|_F \le \|Y_1 - Y_2\|_F$ holds.

Next we claim that the objective values along the iterations of the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let $\{X^i\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^i)\}$ is monotonically nonincreasing.

Proof. Since $X^{i+1} \in \arg\min\{H(X, X^i) : X \in \Gamma \cap \Omega\}$, we have

$$H(X^{i+1}, X^i) \le H(X^i, X^i) = f(X^i) + \frac{\rho}{2}\|X^i\|_F^2 - \frac{\rho}{2}g(X^i) = F(X^i) \qquad (35)$$

and $\partial g(X)$ is the subdifferential of the convex function $g(X)$, which implies

$$\forall X, Y: \quad g(X) \ge g(Y) + \langle\partial g(Y), X - Y\rangle \qquad (36)$$

Letting $X = X^{i+1}$ and $Y = X^i$ in the above inequality, we have

$$g(X^{i+1}) \ge g(X^i) + \langle\partial g(X^i), X^{i+1} - X^i\rangle \qquad (37)$$

Therefore

$$\begin{aligned}
H(X^{i+1}, X^i) &= f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}\left(g(X^i) + \langle\partial g(X^i), X^{i+1} - X^i\rangle\right)\\
&\ge f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}g(X^{i+1}) = F(X^{i+1})
\end{aligned} \qquad (38)$$

Consequently,

$$F(X^{i+1}) \le H(X^{i+1}, X^i) \le H(X^i, X^i) = F(X^i) \qquad (39)$$

We immediately obtain the conclusion as desired.
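The inequality chain (35)-(39) rests on the majorization property $H(X, X^i) \ge F(X)$, with equality at $X = X^i$, which follows from the convexity of $g$. A numerical spot-check (illustrative Python of my own; the subgradient $2U^iW(U^i)^TX^i$ is formed as in Section 2.2, and the choice of $f$ here is an arbitrary smooth convex example):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, k, rho = 5, 4, 2, 2.0
B = rng.standard_normal((m, n))

def f(X):                                   # any smooth convex f works here
    return 0.5 * np.linalg.norm(X - B, 'fro') ** 2

def g(X):
    return np.sum(np.linalg.svd(X, compute_uv=False)[:k] ** 2)

def F(X):
    return f(X) + rho / 2 * np.linalg.norm(X, 'fro') ** 2 - rho / 2 * g(X)

def H(X, Xi):
    U = np.linalg.svd(Xi)[0]
    subgrad = 2 * U[:, :k] @ (U[:, :k].T @ Xi)      # 2 U W U^T X_i
    lin = g(Xi) + np.sum(subgrad * (X - Xi))        # linearization of g at X_i
    return f(X) + rho / 2 * np.linalg.norm(X, 'fro') ** 2 - rho / 2 * lin

X, Xi = rng.standard_normal((m, n)), rng.standard_normal((m, n))
assert H(X, Xi) >= F(X) - 1e-9        # H majorizes F (since g is convex)
assert np.isclose(H(Xi, Xi), F(Xi))   # with equality at the expansion point
```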

Next we show the convergence of the iterative sequence generated by the stepwise linear approximative algorithm when solving (23). Theorem 4 covers the case $\Gamma \cap \Omega = R^{m\times n}$.

Theorem 4. Let $\Upsilon$ be the set of stable points of problem (23) without constraints; then the iterative sequence $\{X^i\}$ generated by the stepwise linear approximative algorithm converges to some $X^{\mathrm{opt}} \in \Upsilon$ as $\rho \to \infty$.

Proof. Assume that $X^{\mathrm{opt}} \in \Upsilon$ is any optimal solution of problem (23) with $\Gamma \cap \Omega = R^{m\times n}$. According to the first-order optimality conditions, we have

$$0 = \partial f(X^{\mathrm{opt}}) + \rho X^{\mathrm{opt}} - \rho U^{\mathrm{opt}}W(U^{\mathrm{opt}})^TX^{\mathrm{opt}} \qquad (40)$$

Since $X^{i+1} \in \arg\min\{f(X) + (\rho/2)\|X\|_F^2 - (\rho/2)(g(X^i) + \langle\partial g(X^i), X - X^i\rangle)\}$, then

$$0 = \partial f(X^{i+1}) + \rho X^{i+1} - \rho U^iW(U^i)^TX^i \qquad (41)$$

Subtracting (40) from (41) gives

$$X^{i+1} - X^{\mathrm{opt}} = -\frac{1}{\rho}\left(\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\right) + U^iW(U^i)^TX^i - U^{\mathrm{opt}}W(U^{\mathrm{opt}})^TX^{\mathrm{opt}} \qquad (42)$$

in which $U^iW(U^i)^TX^i - U^{\mathrm{opt}}W(U^{\mathrm{opt}})^TX^{\mathrm{opt}} = U^iW\Sigma^i(V^i)^T - U^{\mathrm{opt}}W\Sigma^{\mathrm{opt}}(V^{\mathrm{opt}})^T$. By Lemma 2 we have

$$\left\|U^iW(U^i)^TX^i - U^{\mathrm{opt}}W(U^{\mathrm{opt}})^TX^{\mathrm{opt}}\right\|_F = \left\|U^iW\Sigma^i(V^i)^T - U^{\mathrm{opt}}W\Sigma^{\mathrm{opt}}(V^{\mathrm{opt}})^T\right\|_F \le \left\|X^i - X^{\mathrm{opt}}\right\|_F \qquad (43)$$

Taking Frobenius norms on both sides of (42) and applying the triangle inequality together with (43), we have

$$\begin{aligned}
\|X^{i+1} - X^{\mathrm{opt}}\|_F &= \lim_{\rho\to\infty}\|X^{i+1} - X^{\mathrm{opt}}\|_F\\
&\le \lim_{\rho\to\infty}\left(\frac{1}{\rho}\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\|_F + \|X^i - X^{\mathrm{opt}}\|_F\right)\\
&= \lim_{\rho\to\infty}\frac{1}{\rho}\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\|_F + \lim_{\rho\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = \|X^i - X^{\mathrm{opt}}\|_F
\end{aligned} \qquad (44)$$

which implies that the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing as $\rho \to \infty$. We then observe that $\|X^i - X^{\mathrm{opt}}\|_F$ is bounded, and hence $\lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F$ exists, denoted by $\|\bar X - X^{\mathrm{opt}}\|_F$, where $\bar X$ is the limit point of $\{X^i\}$. Letting $i \to \infty$ in (41), by continuity we get $0 = \partial f(\bar X) + \rho\bar X - \rho\bar UW\bar U^T\bar X$, which implies that $\bar X \in \Upsilon$. By taking $X^{\mathrm{opt}} = \bar X$, we have

$$\lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = \|\bar X - X^{\mathrm{opt}}\|_F = 0 \qquad (45)$$

which completes the proof.

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set $\Gamma \cap \Omega$.

Theorem 5. Let $\xi$ be the set of optimal solutions of problem (23); then the iterative sequence $\{Y^i\}$ generated by the stepwise linear approximative algorithm converges to some $Y^{\mathrm{opt}} \in \xi$ as $\rho \to \infty$.

Proof. Generally speaking, subproblem (26) can be solved by projecting the solution of the unconstrained problem onto the closed convex set, that is, $Y^{i+1} = P_{\Gamma\cap\Omega}(X^{i+1})$, where $X^{i+1} \in \arg\min H(X, X^i)$. According to Theorem 4, the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing and convergent as $\rho \to \infty$; together with the well-known fact that $P$ is a nonexpansive operator, we have

$$\|Y^i - Y^{\mathrm{opt}}\|_F = \|P_{\Gamma\cap\Omega}(X^i) - P_{\Gamma\cap\Omega}(X^{\mathrm{opt}})\|_F \le \|X^i - X^{\mathrm{opt}}\|_F \qquad (46)$$

where $Y^{\mathrm{opt}}$ is the projection of $X^{\mathrm{opt}}$ onto $\Gamma \cap \Omega$. Clearly $Y^{\mathrm{opt}} \in \xi$. Hence $0 \le \lim_{i\to\infty}\|Y^i - Y^{\mathrm{opt}}\|_F \le \lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = 0$, which implies that $\{Y^i\}$ converges to $Y^{\mathrm{opt}}$. We immediately obtain the conclusion.
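The nonexpansiveness of the projection used in (46) holds for any closed convex set. For example, for the PSD cone (relevant to the max-cut constraint $X \succeq 0$), the projection of a symmetric matrix clips negative eigenvalues, and distances never expand (illustrative NumPy sketch of my own):

```python
import numpy as np

def proj_psd(S):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone."""
    w, Q = np.linalg.eigh((S + S.T) / 2)
    return (Q * np.maximum(w, 0)) @ Q.T

rng = np.random.default_rng(4)
n = 6
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X1, X2 = (A1 + A1.T) / 2, (A2 + A2.T) / 2   # random symmetric matrices

d_proj = np.linalg.norm(proj_psd(X1) - proj_psd(X2), 'fro')
d_orig = np.linalg.norm(X1 - X2, 'fro')
assert d_proj <= d_orig + 1e-9   # projections onto convex sets are nonexpansive
```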

3. Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to solve affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes implemented in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to solve one case of affine rank minimization problems. It can be formulated as

\[ \min \; \frac{1}{2}\|\mathcal{A}(X) - b\|_F^2 \quad \text{s.t.} \; \mathrm{rank}(X) \le k, \; X \in R^{m \times n}, \tag{47} \]

where $X \in R^{m \times n}$ is the decision variable and the linear map $\mathcal{A}: R^{m \times n} \to R^{p}$ and vector $b \in R^{p}$ are known. Clearly (47) is a special case of problem (3); hence our method can be suitably applied to (47).
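To make the ingredients of (47) concrete, here is a small Python sketch (our illustration; the name `A_mat`, the sizes, and the seed are hypothetical) representing the linear map $\mathcal{A}$ as a $p \times mn$ matrix acting on $\mathrm{vec}(X)$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 8, 10, 2, 60
# Rank-k ground truth and a Gaussian linear map A(X) = A_mat @ vec(X)
X_true = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A_mat = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A_mat @ X_true.ravel()
# Objective of (47) evaluated at the ground truth: zero residual
residual = 0.5 * np.linalg.norm(A_mat @ X_true.ravel() - b) ** 2
assert residual < 1e-20
assert np.linalg.matrix_rank(X_true) == k
```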

Recht et al. [39] have demonstrated that when we sample linear maps from a certain class of probability distributions, they obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric. First, it must be isometric in expectation. Second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first is the ensemble with independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Bernoulli. The third has zeros in two-thirds of the entries and the remaining entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Bernoulli. The last has an orthonormal basis, which we call Random Projection.

Figure 1: Performance curves under four different measurement matrices: SNR versus the number of linear constraints.
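The four ensembles can be sampled as follows. This Python sketch is ours; the function name and the scalings are assumptions, chosen so that each map is nearly isometric in expectation:

```python
import numpy as np

def measurement_matrix(kind, p, d, rng):
    """Sample a p x d nearly isometric measurement matrix (illustrative sketch)."""
    if kind == "gauss":
        return rng.standard_normal((p, d)) / np.sqrt(p)
    if kind == "bernoulli":
        return rng.choice([-1.0, 1.0], size=(p, d)) / np.sqrt(p)
    if kind == "sparse_bernoulli":
        # zero with probability 2/3, +-1 each with probability 1/6
        vals = rng.choice([0.0, 1.0, -1.0], size=(p, d), p=[2/3, 1/6, 1/6])
        return vals * np.sqrt(3.0 / p)
    if kind == "projection":
        # rows form an orthonormal basis of a random p-dimensional subspace
        q, _ = np.linalg.qr(rng.standard_normal((d, p)))
        return q.T
    raise ValueError(kind)

rng = np.random.default_rng(1)
A = measurement_matrix("projection", 20, 50, rng)
assert np.allclose(A @ A.T, np.eye(20), atol=1e-10)  # orthonormal rows
```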

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix $X$, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is 46 × 81 = 3726). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Bernoulli i.i.d., Sparse Bernoulli i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo $X$ to be recovered if $\|X - X^{\mathrm{opt}}\|_F / \|X\|_F \le 10^{-3}$, that is, SNR $\ge 60$ dB, where $\mathrm{SNR} = -20\log(\|X - X^{\mathrm{opt}}\|_F / \|X\|_F)$. Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the curves of SNR under the four different measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the $x$-axis represent the number of linear constraints $p$, that is, the number of measurements, while those on the $y$-axis represent the Signal to Noise Ratio between the true original $X$ and the $X^{\mathrm{opt}}$ recovered by our method. Noticeably, Gauss and Sparse Bernoulli can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about
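The recovery criterion above can be coded directly. A short Python sketch (ours; the test matrices are hypothetical, and the dB convention uses base-10 logarithm):

```python
import numpy as np

def snr_db(X, X_opt):
    """SNR = -20 log10(||X - X_opt||_F / ||X||_F), matching the recovery criterion."""
    return -20.0 * np.log10(np.linalg.norm(X - X_opt, "fro") / np.linalg.norm(X, "fro"))

X = np.eye(5)
X_opt = X + 1e-4 * np.ones((5, 5))  # relative error about 2.24e-4 < 1e-3
assert snr_db(X, X_opt) >= 60.0     # would be declared "recovered"
```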


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints. Bernoulli requires the fewest constraints, about 950. The Gauss and Sparse Bernoulli measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Bernoulli measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but has the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than $2k(m+n-k)$. Compare with Figure 2, which shows the results of the nuclear norm heuristic method in [39], a kind of convex relaxation: there, the sharp transition to perfect recovery occurs near 1200 measurements, which is approximately equal to $2k(m+n-k)$. The comparison shows that our method has better performance in that it can recover an image with fewer constraints, namely, it needs fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method achieves 120 dB when the number of measurements approximately equals 1500, while our method needs just 1200 measurements, a decrease of 20% relative to 1500. If the number of measurements were 1500, the accuracy of our method would be higher still. Among the precisions of the four different measurement matrices, the smallest is that of Random Projection; however, its value is still higher than 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500, we find that our curve rises smoothly compared with the curve in the same interval in Figure 2; this shows that our method is stable for reconstructing images.

Figure 3 shows recovered images under four different measurement matrices with 800, 900, and 1000 linear constraints. Figure 4 shows images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely sensitive to $p$ compared to SeDuMi. The reason is that closed-form solutions can be obtained when we solve this problem. Comparisons are shown in Table 1.

Table 1: DC versus SeDuMi in time (seconds).

p      | 800   | 900    | 1000   | 1100   | 1200   | 1500
DC     | 4.682 | 5.049  | 5.078  | 5.138  | 5.196  | 5.409
SeDuMi | 9.326 | 12.016 | 13.824 | 15.944 | 16.176 | 24.538

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems. They can be formulated as

\[ \min \; \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 \quad \text{s.t.} \; \mathrm{rank}(X) \le k, \; X \in R^{m \times n}, \tag{48} \]

where $X$ and $M$ are both $m \times n$ matrices and $\Omega$ is a subset of index pairs $(i, j)$. Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with $f(X) = \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2$.
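A small Python sketch of the sampling operator $P_\Omega$ and the objective $f$ in (48) (our illustration; `mask` encodes $\Omega$ as a Boolean matrix):

```python
import numpy as np

def P_Omega(X, mask):
    """Sampling operator: keep observed entries (mask == True), zero elsewhere."""
    return np.where(mask, X, 0.0)

def f(X, M, mask):
    """f(X) = 0.5 * ||P_Omega(X) - P_Omega(M)||_F^2, the objective of (48)."""
    return 0.5 * np.linalg.norm(P_Omega(X, mask) - P_Omega(M, mask), "fro") ** 2

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
mask = rng.random((4, 4)) < 0.5
assert f(M, M, mask) == 0.0  # the true matrix has zero misfit on observed entries
```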

In our experiments we first created random matrices $M_L \in R^{m \times k}$ and $M_R \in R^{m \times k}$ with i.i.d. Gaussian entries and then set $M = M_L M_R^T$. We sampled a subset $\Omega$ of $p$ entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC, FPCA [10], and the OptSpace (OPT) method [29], using

\[ \frac{\|P_\Omega(X) - P_\Omega(M)\|_F}{\|P_\Omega(M)\|_F} \le 10^{-3} \tag{49} \]

as a stopping criterion. The maximum number of iterations is no more than 1500 in all methods; other parameters are left at their defaults. We report results averaged over 10 runs.
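The experimental setup just described can be sketched as follows (our Python illustration; the sizes mirror one Table 2 setting, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 40; k = 3; p = 800          # SR = p/(m*n) = 0.5, as in Table 2
M_L = rng.standard_normal((m, k))
M_R = rng.standard_normal((n, k))
M = M_L @ M_R.T                      # rank-k ground truth
idx = rng.choice(m * n, size=p, replace=False)   # sample p entries uniformly
mask = np.zeros(m * n, dtype=bool); mask[idx] = True
mask = mask.reshape(m, n)

def stop(X, M, mask, tol=1e-3):
    """Stopping rule (49): ||P_Omega(X - M)||_F / ||P_Omega(M)||_F <= tol."""
    num = np.linalg.norm(np.where(mask, X - M, 0.0), "fro")
    den = np.linalg.norm(np.where(mask, M, 0.0), "fro")
    return num / den <= tol

assert np.linalg.matrix_rank(M) == k
assert stop(M, M, mask)              # the ground truth trivially passes
```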

We use SR = $p/(mn)$ to denote the sampling ratio and FR = $k(m + n - k)/p$ to represent the dimension of the set of rank-$k$ matrices divided by the number of measurements. NS and AT denote the number of matrices that are recovered successfully and the average time (seconds) over the successfully solved instances, respectively. RE represents the average relative error of the successfully recovered matrices. We use the relative error

\[ \frac{\|M - X^{*}\|_F}{\|M\|_F} \tag{50} \]

to estimate the closeness of $X^{*}$ to $M$, where $X^{*}$ is the optimal solution to (48) produced by our method. We declared $M$ to be recovered if the relative error was less than $10^{-3}$.
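The quantities SR and FR are one-line formulas; for example, in Python (checking them against the $k = 6$ row of Table 2):

```python
m = n = 40; k = 6; p = 800
SR = p / (m * n)             # sampling ratio
FR = k * (m + n - k) / p     # degrees of freedom / number of measurements
assert SR == 0.5
assert abs(FR - 0.5550) < 1e-4   # matches the k = 6 entry of Table 2
```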

Table 2 shows that our method has performance close to that of OptSpace in the NS column. These performances are better than those of SVT, FPC, and FPCA. However, SVT, FPC, and FPCA cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. The results do not mean that these algorithms are invalid, but the maximum


Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0988 | 10 | 0.02 | 1.23E-04 | 6 | 0.5550 | 9  | 0.12 | 4.64E-04
SVT      |   |        | 10 | 0.11 | 1.59E-04 |   |        | 0  | —    | —
FPC      |   |        | 9  | 0.13 | 6.52E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.03 | 3.98E-05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.31E-04 |   |        | 10 | 0.21 | 4.05E-04
DC       | 2 | 0.1950 | 10 | 0.03 | 1.50E-04 | 7 | 0.6388 | 9  | 0.17 | 6.11E-04
SVT      |   |        | 9  | 0.18 | 1.83E-04 |   |        | 0  | —    | —
FPC      |   |        | 8  | 0.14 | 6.52E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.02 | 5.83E-05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.51E-04 |   |        | 9  | 0.37 | 5.37E-04
DC       | 3 | 0.2888 | 10 | 0.05 | 2.68E-04 | 8 | 0.7200 | 7  | 0.28 | 7.90E-04
SVT      |   |        | 2  | 0.36 | 2.47E-04 |   |        | 0  | —    | —
FPC      |   |        | 5  | 0.25 | 8.58E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.06 | 1.38E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.75E-04 |   |        | 9  | 0.82 | 6.68E-04
DC       | 4 | 0.3800 | 10 | 0.06 | 2.84E-04 | 9 | 0.7987 | 4  | 0.40 | 8.93E-04
SVT      |   |        | 1  | 0.62 | 1.38E-04 |   |        | 0  | —    | —
FPC      |   |        | 1  | 0.25 | 8.72E-04 |   |        | 0  | —    | —
FPCA     |   |        | 0  | —    | —        |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.68E-04 |   |        | 4  | 1.48 | 9.28E-04
DC       | 5 | 0.4688 | 10 | 0.08 | 3.82E-04 |
SVT      |   |        | 0  | —    | —        |
FPC      |   |        | 0  | —    | —        |
FPCA     |   |        | 0  | —    | —        |
OptSpace |   |        | 10 | 0.09 | 3.16E-04 |

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0995 | 10 | 0.30 | 1.91E-04 | 5 | 0.4875 | 10 | 1.50 | 5.05E-04
SVT      |   |        | 7  | 1.82 | 2.04E-04 |   |        | 0  | —    | —
FPC      |   |        | 10 | 3.14 | 8.06E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.12 | 1.73E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.03 | 1.71E-04 |   |        | 10 | 0.58 | 5.57E-04
DC       | 2 | 0.1980 | 10 | 0.77 | 3.34E-04 | 6 | 0.5820 | 6  | 2.82 | 7.45E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 8  | 0.21 | 2.97E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.27E-04 |   |        | 9  | 0.97 | 6.57E-04
DC       | 3 | 0.2955 | 10 | 0.64 | 3.03E-04 | 7 | 0.6755 | 5  | 3.24 | 7.66E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 6  | 0.09 | 2.20E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.12 | 3.76E-04 |   |        | 5  | 1.89 | 7.95E-04
DC       | 4 | 0.3920 | 9  | 1.16 | 4.40E-04 |
SVT      |   |        | 0  | —    | —        |
FPC      |   |        | 0  | —    | —        |
FPCA     |   |        | 0  | —    | —        |
OptSpace |   |        | 10 | 0.22 | 3.85E-04 |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original; results under Gauss, Bernoulli, Sparse Bernoulli, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg      | k | FR     | NS | AT   | RE       | k  | FR     | NS | AT   | RE
DC       | 1 | 0.0663 | 10 | 0.19 | 1.49E-04 | 7  | 0.4530 | 10 | 0.61 | 3.64E-04
SVT      |   |        | 10 | 0.68 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 1.33 | 4.47E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.17 | 6.12E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 1.29E-04 |    |        | 10 | 0.47 | 3.39E-04
DC       | 2 | 0.1320 | 10 | 0.20 | 1.69E-04 | 8  | 0.5120 | 10 | 0.87 | 4.70E-04
SVT      |   |        | 8  | 1.39 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 2.18 | 4.84E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.16 | 7.68E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.48E-04 |    |        | 10 | 0.63 | 3.67E-04
DC       | 3 | 0.1970 | 10 | 0.25 | 1.92E-04 | 9  | 0.5730 | 9  | 1.03 | 4.85E-04
SVT      |   |        | 4  | 1.77 | 1.23E-04 |    |        | 0  | —    | —
FPC      |   |        | 9  | 4.32 | 5.82E-04 |    |        | 0  | —    | —
FPCA     |   |        | 9  | 0.19 | 1.21E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.07 | 1.98E-04 |    |        | 10 | 1.40 | 4.87E-04
DC       | 4 | 0.2613 | 10 | 0.32 | 2.30E-04 | 10 | 0.6333 | 8  | 1.36 | 6.21E-04
SVT      |   |        | 4  | 9.82 | 4.67E-04 |    |        | 0  | —    | —
FPC      |   |        | 7  | 5.02 | 6.59E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.18 | 6.63E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.09 | 2.08E-04 |    |        | 10 | 1.80 | 5.97E-04
DC       | 5 | 0.3250 | 10 | 0.37 | 2.72E-04 | 11 | 0.6930 | 9  | 2.19 | 7.29E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 1  | 8.18 | 7.34E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.10 | 2.20E-04 |    |        | 9  | —    | —
OptSpace |   |        | 10 | 0.19 | 2.67E-04 |    |        | 5  | 4.11 | 7.05E-04
DC       | 6 | 0.3880 | 10 | 0.47 | 3.00E-04 | 12 | 0.7520 | 6  | 2.83 | 8.79E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPCA     |   |        | 0  | 0.09 | 2.20E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.25 | 3.25E-04 |    |        | 6  | 6.44 | 8.47E-04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., $k \ge 5$), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are minimal. When the dimension of the matrix becomes larger, the DC method has a better performance in NS and relative errors similar to those of OptSpace. In terms of time, our time is longer than that of OptSpace but shorter than those of FPC and SVT. Tables 3 and 4 show the above results. In conclusion, our method has better performance on "hard" problems, that is, FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen as the default of each compared method. We observe that our method can recover problems successfully, as the other four methods do, although our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original; results by our method; and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n) | k  | p      | SR   | FR   | Alg      | NS | AT     | RE
100  | 10 | 5666   | 0.57 | 0.34 | DC       | 10 | 0.14   | 1.91E-04
     |    |        |      |      | SVT      | 10 | 2.81   | 1.85E-04
     |    |        |      |      | FPC      | 10 | 5.23   | 1.83E-04
     |    |        |      |      | FPCA     | 10 | 0.28   | 4.94E-05
     |    |        |      |      | OptSpace | 10 | 0.90   | 1.41E-04
200  | 10 | 15665  | 0.39 | 0.25 | DC       | 10 | 0.80   | 1.65E-04
     |    |        |      |      | SVT      | 10 | 2.93   | 2.53E-04
     |    |        |      |      | FPC      | 10 | 4.18   | 2.73E-04
     |    |        |      |      | FPCA     | 10 | 0.08   | 6.88E-05
     |    |        |      |      | OptSpace | 10 | 0.41   | 1.72E-04
300  | 10 | 27160  | 0.30 | 0.22 | DC       | 10 | 2.72   | 1.67E-04
     |    |        |      |      | SVT      | 10 | 2.57   | 1.79E-04
     |    |        |      |      | FPC      | 10 | 9.80   | 1.63E-04
     |    |        |      |      | FPCA     | 10 | 0.39   | 6.71E-05
     |    |        |      |      | OptSpace | 10 | 1.85   | 1.50E-04
400  | 10 | 41269  | 0.26 | 0.19 | DC       | 10 | 5.73   | 1.56E-04
     |    |        |      |      | SVT      | 10 | 2.78   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 13.28  | 1.30E-04
     |    |        |      |      | FPCA     | 10 | 0.73   | 6.65E-05
     |    |        |      |      | OptSpace | 10 | 3.47   | 1.46E-04
500  | 10 | 49471  | 0.20 | 0.20 | DC       | 10 | 13.63  | 1.73E-04
     |    |        |      |      | SVT      | 10 | 2.94   | 1.74E-04
     |    |        |      |      | FPC      | 10 | 25.75  | 1.44E-04
     |    |        |      |      | FPCA     | 10 | 1.12   | 1.07E-04
     |    |        |      |      | OptSpace | 10 | 5.05   | 1.60E-04
1000 | 10 | 119406 | 0.12 | 0.17 | DC       | 10 | 356.41 | 1.70E-04
     |    |        |      |      | SVT      | 10 | 4.56   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 110.53 | 1.29E-04
     |    |        |      |      | FPCA     | 10 | 5.16   | 1.64E-04
     |    |        |      |      | OptSpace | 10 | 14.61  | 1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a wider class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems. They can be formulated as follows:

\[ \min \; \mathrm{Tr}(LX) \quad \text{s.t.} \; \mathrm{diag}(X) = e, \; \mathrm{rank}(X) = 1, \; X \succeq 0, \tag{51} \]

where $X = xx^{T}$, $x = (x_1, \ldots, x_n)^{T}$, $L = \frac{1}{4}(\mathrm{diag}(We) - W)$, $e = (1, 1, \ldots, 1)^{T}$, and $W$ is the weight matrix. Equation (51) is one general case of problem (3) with $\Gamma \cap \Omega = \{X \mid \mathrm{diag}(X) = e, X \in S^{n}_{+}\}$. Hence our method can be suitably applied to (51).
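As a concrete check of the model in (51), the following Python sketch (ours; the 5-cycle C5 with unit weights is the small example reported in Table 6) builds $L$ and evaluates $\mathrm{Tr}(LX)$ at a feasible rank-one $X = xx^{T}$:

```python
import numpy as np

# 5-cycle C5 with unit edge weights
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
e = np.ones(5)
L = 0.25 * (np.diag(W @ e) - W)

x = np.array([-1.0, 1.0, -1.0, 1.0, -1.0])   # the cut reported in Table 6 for C5
X = np.outer(x, x)                            # feasible: diag(X) = e, rank 1, PSD
cut_value = np.trace(L @ X)                   # Tr(LX) equals the weight of the cut
assert np.allclose(np.diag(X), e)
assert abs(cut_value - 4.0) < 1e-12           # the max cut of C5 has weight 4
```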

We now address the initialization and termination criteria for our method when applied to (51). We set $X^{0} = ee^{T}$; in addition, we choose the initial penalty parameter $\rho$ to be 0.01 and set the parameter $\tau$ to be 1.1. We use $\|X^{i+1} - X^{i}\|_F \le \epsilon$ as the termination criterion, with $\epsilon = 10^{-5}$.

In our numerical experiments we first test several small scale max-cut problems; then we perform our method on the standard "G" set problems written by Professor Rinaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the corresponding dimension of the graph and the rank of the solution, respectively. The column "obv" gives the objective values of the optimal matrix solutions obtained by our method.

In Table 6, "KV" denotes the known optimal objective values and "UB" presents upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for solving these problems; it has computed the global optimal solutions of all of them. Moreover, we obtain lower rank solutions compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds by the spectral bundle method proposed by Helmberg and Rendl [41]. "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]. "DC" denotes results generated by our method; "DC+" gives the results of DC with a neighborhood approximation algorithm; and the rightmost column, titled "BK", gives the best results known to us according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research and better than the values given by GW-cut, and they are all less than the corresponding upper bounds. For all the problems we obtain lower rank solutions compared with GW-cut; moreover, most solutions satisfy the constraint rank = 1. The results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper, based on closed-form solutions of a penalty subproblem, we reformulated convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph | m  | KV  | GW-cut UB | GW-cut k | DC obv | DC k | Optimal solutions
C5    | 5  | 4   | 4.525     | 5        | 4      | 1    | (-1, 1, -1, 1, -1)
K5    | 5  | 6   | 6.25      | 5        | 6      | 1    | (-1, 1, -1, 1, -1)
AG    | 5  | 928 | 960.40    | 5        | 928    | 1    | (-1, 1, 1, -1, -1)
BG    | 12 | 88  | 90.3919   | 12       | 88     | 1    | (-1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph | m    | SUB   | GW-cut obv | GW-cut k | DC obv | DC k | DC+   | BK
G01   | 800  | 12078 | 10612      | 14       | 11281  | 1    | 11518 | 11624
G02   | 800  | 12084 | 10617      | 31       | 11261  | 1    | 11550 | 11620
G03   | 800  | 12077 | 10610      | 14       | 11275  | 1    | 11540 | 11622
G11   | 800  | 627   | 551        | 30       | 494    | 2    | 528   | 564
G14   | 800  | 3187  | 2800       | 13       | 2806   | 1    | 2980  | 3063
G15   | 800  | 3169  | 2785       | 70       | 2767   | 1    | 2969  | 3050
G16   | 800  | 3172  | 2782       | 14       | 2800   | 1    | 2974  | 3052
G43   | 1000 | 7027  | 6174       | —        | 6321   | 1    | 6519  | 6660
G45   | 1000 | 7020  | 6168       | —        | 6348   | 1    | 6564  | 6654
G52   | 1000 | 4009  | 3520       | —        | 3383   | 1    | 3713  | 3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors wish to acknowledge the following fund projects. Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17-24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734-4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11-22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39-77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161-187, 2003.

[9] J. F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321-353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110-7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327-2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291-305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823-829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327-337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364-3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791-1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058-1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255-271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1-5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Houssain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114-3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1-5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957-2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443-461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531-558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937-945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584-587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980-2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 512-267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137-166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673-696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115-1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565-571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29-35, 2004.


by $\mathrm{rank}(X)$. We use $I_q$ to denote the identity matrix of dimension $q$. Throughout this paper we always assume that the singular values are arranged in nonincreasing order, that is, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0 = \sigma_{r+1} = \cdots = \sigma_{\min\{m,n\}}$. $\partial f$ denotes the subdifferential of the function $f$. We denote the adjoint of $A$ by $A^{*}$. $\|\cdot\|_{(k)}$ is used to denote the Ky Fan $k$-norm. Let $P_\Omega$ be the projection onto the closed convex set $\Omega$.

2. An Efficient Algorithm Based on DC Programming

In this section we first reformulate problem (3) as a DC programming problem with a convex constraint set, which can be solved via the stepwise linear approximation method. Then the convergence is provided.

2.1. Model Transformation. Let $X = Y$. Problem (3) is rewritten as

\[ \min \; f(X) \quad \text{s.t.} \; X = Y, \; \mathrm{rank}(Y) \le k, \; X \in \Gamma \cap \Omega. \tag{5} \]

Adopting the penalty function method to penalize $X = Y$ into the objective function and choosing a proper penalty parameter $\rho$, (5) can be reformulated as

\[ \min \; f(X) + \frac{\rho}{2}\|X - Y\|_F^2 \quad \text{s.t.} \; \mathrm{rank}(Y) \le k, \; X \in \Gamma \cap \Omega. \tag{6} \]

Solving for $Y$ with fixed $X \in \Gamma \cap \Omega$, (6) can be treated as

\[ \min_X \; f(X) + \min_{\mathrm{rank}(Y) \le k} \frac{\rho}{2}\|X - Y\|_F^2 \quad \text{s.t.} \; X \in \Gamma \cap \Omega. \tag{7} \]

In fact, given the singular value decomposition $X = USV^T$ with singular values $\sigma_j(X)$, $j = 1, \dots, \min\{m, n\}$, where $U \in R^{m\times m}$ and $V \in R^{n\times n}$ are orthogonal matrices, the inner minimization problem with respect to $Y$ with $X$ fixed in (7), that is, $\min_{\operatorname{rank}(Y)\le k}(\rho/2)\|X - Y\|_F^2$, has the closed-form solution $Y = UCV^T$ with
\[
C = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}, \tag{8}
\]
where $D = \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_k) \in R^{k\times k}$. Thus, by this closed-form solution and the unitary invariance of the $F$-norm, we have

\[
\begin{aligned}
\min_{\operatorname{rank}(Y)\le k} \frac{\rho}{2}\|X - Y\|_F^2
&= \min_{\operatorname{rank}(Y)\le k} \frac{\rho}{2}\big\|U^T X V - U^T Y V\big\|_F^2 \\
&= \frac{\rho}{2}\|S - C\|_F^2 \\
&= \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\sum_{i=1}^{k}\sigma_i^2(X).
\end{aligned} \tag{9}
\]
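The minimizer in (8) is just the truncated SVD of $X$, and identity (9) can be checked numerically. The following sketch (Python/NumPy, illustrative only — not the paper's MATLAB code; the helper name `best_rank_k` is ours) verifies that the penalty term at the minimizer equals $(\rho/2)(\|X\|_F^2 - \sum_{i\le k}\sigma_i^2(X))$:

```python
import numpy as np

def best_rank_k(X, k):
    """Closed-form minimizer of (rho/2)*||X - Y||_F^2 over rank(Y) <= k:
    keep the k largest singular values of X (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
k, rho = 3, 2.0
Y = best_rank_k(X, k)
s = np.linalg.svd(X, compute_uv=False)

# identity (9): (rho/2)*||X - Y||_F^2 = (rho/2)*(||X||_F^2 - sum of top-k sigma^2)
lhs = 0.5 * rho * np.linalg.norm(X - Y, "fro") ** 2
rhs = 0.5 * rho * (np.linalg.norm(X, "fro") ** 2 - np.sum(s[:k] ** 2))
assert np.isclose(lhs, rhs)
```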

Substituting the above into problem (7), problem (3) is reformulated as
\[
\begin{aligned}
\min\quad & f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\sum_{i=1}^{k}\sigma_i^2(X) \\
\text{s.t.}\quad & X \in \Gamma \cap \Omega.
\end{aligned} \tag{10}
\]

Clearly, $f(X) + (\rho/2)\|X\|_F^2$ is a convex function of $X$. Define the function $g: R^{m\times n} \to R$, $g(X) = \sum_{i=1}^{k}\sigma_i^2(X)$, $1 \le k \le \min\{m, n\}$. Next we study the properties of $g(X)$. If $g(X)$ is convex, problem (3) becomes a DC programming problem with convex constraints, which can be solved via the stepwise linear approximation method. In fact, convexity does hold, and $g(X)$ equals a trace function.

Theorem 1. Let
\[
W = \begin{pmatrix} I_k & O \\ O & O \end{pmatrix} \in R^{m\times m}, \tag{11}
\]
where $O$ denotes a zero matrix whose dimension is determined by the context. Then for any $X = USV^T \in R^{m\times n}$ the following hold:

(1) $g(X)$ is a convex function.

(2) $g(X) = \operatorname{Tr}(X^T U W U^T X)$.

Proof. (1) Since $\sigma_j^2(X)$, $j = 1, \dots, \min\{m, n\}$, are the eigenvalues of $XX^T$, we have $g(X) = \sum_{i=1}^{k}\sigma_i^2(X) = \|XX^T\|_{(k)}$, where $\|\cdot\|_{(k)}$ is the Ky Fan $k$-norm (defined in [36]). Clearly, $\|\cdot\|_{(k)}$ is a convex function. Let $\varphi(\cdot) = \|\cdot\|_{(k)}$, $1 \le k \le \min\{m, n\}$. For all $A, B \in R^{m\times n}$ and $\alpha \in [0, 1]$, we have
\[
\begin{aligned}
g(\alpha A + (1-\alpha)B) &= \varphi\big((\alpha A + (1-\alpha)B)(\alpha A + (1-\alpha)B)^T\big) \\
&= \varphi\big(\alpha^2 AA^T + \alpha(1-\alpha)AB^T + \alpha(1-\alpha)BA^T + (1-\alpha)^2 BB^T\big) \\
&\le \alpha^2\varphi(AA^T) + \alpha(1-\alpha)\varphi(AB^T) + \alpha(1-\alpha)\varphi(BA^T) + (1-\alpha)^2\varphi(BB^T).
\end{aligned} \tag{12}
\]


Note that $BA^T = (AB^T)^T$; thus they have the same nonzero singular values, so $\varphi(BA^T) = \varphi(AB^T)$ holds. Together with (12), we have
\[
\varphi\big((\alpha A + (1-\alpha)B)(\alpha A + (1-\alpha)B)^T\big) \le \alpha^2\varphi(AA^T) + 2\alpha(1-\alpha)\varphi(AB^T) + (1-\alpha)^2\varphi(BB^T). \tag{13}
\]

It is known that for arbitrary matrices $A, B \in R^{m\times m}$,
\[
\sigma_j(A^*B) \le \frac{1}{2}\,\sigma_j(A^*A + B^*B), \quad 1 \le j \le m \quad \text{(see Theorem IX.4.2 in [36])}. \tag{14}
\]

Further,
\[
\sum_{j=1}^{k}\sigma_j(A^T B) \le \sum_{j=1}^{k}\frac{1}{2}\,\sigma_j(A^*A + B^*B), \quad 1 \le k \le n. \tag{15}
\]

That is,
\[
\varphi(AB^T) \le \frac{1}{2}\,\varphi(AA^T + BB^T), \tag{16}
\]

where $AA^T$ and $BB^T$ are symmetric matrices. It is also known that for arbitrary Hermitian matrices $AA^T, BB^T \in R^{m\times m}$ and $k = 1, 2, \dots, m$, the following holds (see Exercise II.1.15 in [24]):
\[
\sum_{j=1}^{k}\sigma_j(AA^T + BB^T) \le \sum_{j=1}^{k}\sigma_j(AA^T) + \sum_{j=1}^{k}\sigma_j(BB^T). \tag{17}
\]

Thus,
\[
\varphi(A^TA + B^TB) = \varphi(AA^T + BB^T) \le \varphi(AA^T) + \varphi(BB^T). \tag{18}
\]

Together with (13) and (16), (18) yields
\[
g(\alpha A + (1-\alpha)B) \le \alpha\varphi(AA^T) + (1-\alpha)\varphi(BB^T) = \alpha g(A) + (1-\alpha)g(B). \tag{19}
\]

Hence, $g(X) = \sum_{i=1}^{k}\sigma_i^2(X)$ is a convex function.

(2) Let $\Theta = \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_r)$, where $r$ is the rank of $X$ and $k \le r$, and let
\[
S = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}; \tag{20}
\]
then we have
\[
\operatorname{Tr}(X^T U W U^T X) = \operatorname{Tr}(V S^T U^T U W U^T U S V^T) = \operatorname{Tr}(V S^T W S V^T) = \operatorname{Tr}(S^T W S). \tag{21}
\]
Moreover,
\[
S^T W S = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}^{T} \begin{pmatrix} I_k & O \\ O & O \end{pmatrix} \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix} = \begin{pmatrix} \widetilde{\Theta} & O \\ O & O \end{pmatrix}, \tag{22}
\]
where $\widetilde{\Theta} = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2, \dots, \sigma_k^2)$. Thus $\operatorname{Tr}(S^T W S) = \sum_{i=1}^{k}\sigma_i^2(X) = g(X)$ holds.

According to the above theorem, (10) can be rewritten as
\[
\begin{aligned}
\min\quad & f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^T U W U^T X) \\
\text{s.t.}\quad & X \in \Gamma \cap \Omega.
\end{aligned} \tag{23}
\]

Since the objective function of (23) is a difference of convex functions and the constraint set $\Gamma \cap \Omega$ is a closed convex set, (23) is a DC program. Next we use the stepwise linear approximation method to solve it.

2.2. Stepwise Linear Approximative Algorithm. In this subsection, we use the stepwise linear approximation method [37] to solve the reformulated model (23). Let $F(X)$ denote the objective function of model (23), that is,
\[
F(X) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^T U W U^T X), \tag{24}
\]
and define the linear approximation function of $F(X)$ as
\[
H(X, X^i) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\big(g(X^i) + \langle \partial g(X^i), X - X^i \rangle\big), \tag{25}
\]

where $\partial g(X)/\partial X = \partial\operatorname{Tr}(X^T U W U^T X)/\partial X = 2UWU^T X$ (see 5.2.2 in [36]).

The stepwise linear approximative algorithm for solving model (23) can be described as follows.

Given an initial matrix $X^0 \in \Gamma \cap \Omega$, a penalty parameter $\rho > 0$, a rank bound $k > 0$, and $\tau > 1$, set $i = 0$.

Step 1. Compute the singular value decomposition of $X^i$: $U^i \Sigma^i (V^i)^T = X^i$.

Step 2. Update
\[
X^{i+1} \in \arg\min_{X \in \Gamma \cap \Omega} H(X, X^i). \tag{26}
\]

Step 3. If the stopping criterion is satisfied, stop; else set $i \leftarrow i + 1$ and $\rho \leftarrow (1 + \tau)\rho$, and go to Step 1.

Remark.

(1) Subproblem (26) can be handled by solving a series of convex subproblems; efficient methods for convex programming are plentiful and enjoy well-established theoretical guarantees.

(a) When $\Gamma \cap \Omega$ is $R^{m\times n}$, closed-form solutions can be obtained; for example, when solving model (1), $X^{i+1} = (A^*A + \rho I)^{-1}(\rho U^i W (U^i)^T X^i)$ is obtained from the first-order optimality condition
\[
0 \in \partial f(X) + \rho X - \rho U^i W (U^i)^T X^i. \tag{27}
\]

(b) When $\Gamma \cap \Omega$ is a general convex set, a closed-form solution is hard to obtain, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). Depending on the closed convex constraint set, a suitable convex programming algorithm can be chosen and combined with our method; the toolboxes mentioned in this paper are just one possible choice.

(2) The stopping rule adopted here is $\|X^{i+1} - X^i\|_F \le \varepsilon$ for some sufficiently small $\varepsilon$.
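For intuition, here is a minimal NumPy sketch of the algorithm for the unconstrained case $\Gamma \cap \Omega = R^{m\times n}$, using the toy objective $f(X) = \frac{1}{2}\|X - B\|_F^2$ (our choice for illustration, not the paper's model (1)). For this $f$, subproblem (26) has the closed form $X^{i+1} = (B + \rho\, U^i W (U^i)^T X^i)/(1 + \rho)$, and $U^i W (U^i)^T X^i$ is simply the rank-$k$ truncation of $X^i$:

```python
import numpy as np

def truncate_rank_k(X, k):
    # U W U^T X with W = diag(I_k, O) equals the rank-k truncated SVD of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def dc_rank_min(B, k, rho=1.0, tau=0.5, eps=1e-8, max_iter=500):
    """Stepwise linear approximation for the toy DC model
    min (1/2)||X-B||_F^2 + (rho/2)||X||_F^2 - (rho/2) g(X),
    with Gamma ∩ Omega = R^{m x n} (illustrative sketch only)."""
    X = B.copy()
    for _ in range(max_iter):
        # closed-form solve of the linearized subproblem (26)
        X_new = (B + rho * truncate_rank_k(X, k)) / (1.0 + rho)
        if np.linalg.norm(X_new - X, "fro") <= eps:   # stopping rule
            return X_new
        X, rho = X_new, (1.0 + tau) * rho             # increase the penalty
    return X

rng = np.random.default_rng(2)
B = rng.standard_normal((10, 8))
X_opt = dc_rank_min(B, k=2)   # numerical rank of X_opt drops to k = 2
```

As $\rho$ grows, the fixed point is pushed toward the rank-$k$ truncation, so the residual singular values beyond $k$ shrink to zero, matching the convergence behavior analyzed below.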

2.3. Convergence Analysis. In this subsection, we analyze the convergence properties of the sequence $\{X^i\}$ generated by the above algorithm. Before proving the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

Lemma 2. For any $Y_1, Y_2 \in R^{m\times n}$,
\[
\big\|\widehat{Y}_1 - \widehat{Y}_2\big\|_F \le \|Y_1 - Y_2\|_F, \tag{28}
\]
where $\widehat{Y}_i$ is the approximation of $Y_i$ obtained by keeping its first $k$ singular values ($i = 1, 2$).

Proof. Without loss of generality, assume $m \le n$ and take the singular value decompositions $Y_1 = U_1 S_1 V_1^T$ and $Y_2 = U_2 S_2 V_2^T$ with
\[
S_1 = \begin{pmatrix} \operatorname{diag}(\sigma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n}, \qquad
S_2 = \begin{pmatrix} \operatorname{diag}(\gamma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n}, \tag{29}
\]
where $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_s)$, $\sigma_1 \ge \cdots \ge \sigma_s > 0$, and $\gamma = (\gamma_1, \gamma_2, \dots, \gamma_t)$, $\gamma_1 \ge \cdots \ge \gamma_t > 0$. Then $\widehat{Y}_i$ can be written as $\widehat{Y}_i = U_i W S_i V_i^T$ ($i = 1, 2$). Noting that $U_1, V_1, U_2, V_2$ are orthogonal matrices, we have

\[
\begin{aligned}
\|Y_1 - Y_2\|_F^2 - \big\|\widehat{Y}_1 - \widehat{Y}_2\big\|_F^2
&= \operatorname{Tr}\big((Y_1 - Y_2)^T (Y_1 - Y_2)\big) - \operatorname{Tr}\big((\widehat{Y}_1 - \widehat{Y}_2)^T (\widehat{Y}_1 - \widehat{Y}_2)\big) \\
&= \operatorname{Tr}\big(Y_1^T Y_1 - \widehat{Y}_1^T \widehat{Y}_1 + Y_2^T Y_2 - \widehat{Y}_2^T \widehat{Y}_2\big) - 2\operatorname{Tr}\big(Y_1^T Y_2 - \widehat{Y}_1^T \widehat{Y}_2\big) \\
&= \sum_{i=1}^{s}\sigma_i^2 - \sum_{i=1}^{k}\sigma_i^2 + \sum_{i=1}^{t}\gamma_i^2 - \sum_{i=1}^{k}\gamma_i^2 - 2\operatorname{Tr}\big(Y_1^T Y_2 - \widehat{Y}_1^T \widehat{Y}_2\big).
\end{aligned} \tag{30}
\]

And we note that
\[
\begin{aligned}
\operatorname{Tr}\big(Y_1^T Y_2 - \widehat{Y}_1^T \widehat{Y}_2\big)
&= \operatorname{Tr}\big((Y_1 - \widehat{Y}_1)^T (Y_2 - \widehat{Y}_2) + (Y_1 - \widehat{Y}_1)^T \widehat{Y}_2 + \widehat{Y}_1^T (Y_2 - \widehat{Y}_2)\big) \\
&= \operatorname{Tr}\big(V_1 (S_1 - W S_1)^T U_1^T U_2 (S_2 - W S_2) V_2^T + V_1 (S_1 - W S_1)^T U_1^T U_2 W S_2 V_2^T \\
&\qquad + V_1 (W S_1)^T U_1^T U_2 (S_2 - W S_2) V_2^T\big) \\
&= \operatorname{Tr}\big((S_1 - W S_1)^T U (S_2 - W S_2) V^T + (S_1 - W S_1)^T U W S_2 V^T \\
&\qquad + (W S_1)^T U (S_2 - W S_2) V^T\big),
\end{aligned} \tag{31}
\]

where $U = U_1^T U_2$ and $V = V_1^T V_2$ are also orthogonal matrices.

Next, we estimate an upper bound for $\operatorname{Tr}(Y_1^T Y_2 - \widehat{Y}_1^T \widehat{Y}_2)$.

According to Theorem 7.4.9 in [38], an orthogonal matrix $U$ is a maximizing matrix for the problem $\max\{\operatorname{Tr}(AU) : U \text{ orthogonal}\}$ if and only if $AU$ is positive semidefinite. Thus $\operatorname{Tr}((S_1 - WS_1)^T U (S_2 - WS_2)V^T)$, $\operatorname{Tr}((S_1 - WS_1)^T U W S_2 V^T)$, and $\operatorname{Tr}((WS_1)^T U (S_2 - WS_2)V^T)$ achieve their maxima if and only if $(S_1 - WS_1)^T U (S_2 - WS_2)V^T$, $(S_1 - WS_1)^T U W S_2 V^T$, and $(WS_1)^T U (S_2 - WS_2)V^T$ are all positive semidefinite. It is known that when $AB$ is positive semidefinite,
\[
\operatorname{Tr}(AB) = \sum_i \sigma_i(AB) \le \sum_i \sigma_i(A)\,\sigma_i(B). \tag{32}
\]

Applying (32) to the above three terms, we have
\[
\begin{aligned}
\operatorname{Tr}\big((S_1 - WS_1)^T U (S_2 - WS_2) V^T\big) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(S_2 - WS_2), \\
\operatorname{Tr}\big((S_1 - WS_1)^T U W S_2 V^T\big) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(WS_2), \\
\operatorname{Tr}\big((WS_1)^T U (S_2 - WS_2) V^T\big) &\le \sum_i \sigma_i(WS_1)\,\sigma_i(S_2 - WS_2).
\end{aligned} \tag{33}
\]


Hence, without loss of generality assuming $k \le s \le t$, we have
\[
\begin{aligned}
\|Y_1 - Y_2\|_F^2 - \big\|\widehat{Y}_1 - \widehat{Y}_2\big\|_F^2
&\ge \sum_{i=1}^{s}\sigma_i^2 - \sum_{i=1}^{k}\sigma_i^2 + \sum_{i=1}^{t}\gamma_i^2 - \sum_{i=1}^{k}\gamma_i^2 - 2\sum_{i=k+1}^{s}\sigma_i\gamma_i \\
&= \sum_{i=k+1}^{s}(\sigma_i - \gamma_i)^2 + \sum_{i=s+1}^{t}\gamma_i^2 \ge 0,
\end{aligned} \tag{34}
\]
which implies that $\|\widehat{Y}_1 - \widehat{Y}_2\|_F \le \|Y_1 - Y_2\|_F$ holds.

Next we claim that the objective values of the iterates generated by the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let $\{X^i\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^i)\}$ is monotonically nonincreasing.

Proof. Since $X^{i+1} \in \arg\min\{H(X, X^i) \mid X \in \Gamma \cap \Omega\}$, we have
\[
H(X^{i+1}, X^i) \le H(X^i, X^i) = f(X^i) + \frac{\rho}{2}\|X^i\|_F^2 - \frac{\rho}{2}g(X^i) = F(X^i), \tag{35}
\]

and $\partial g(X)$ is the subdifferential of the convex function $g(X)$, which implies
\[
\forall X, Y: \quad g(X) \ge g(Y) + \langle \partial g(Y), X - Y \rangle. \tag{36}
\]
Letting $X = X^{i+1}$ and $Y = X^i$ in the above inequality, we have
\[
g(X^{i+1}) \ge g(X^i) + \langle \partial g(X^i), X^{i+1} - X^i \rangle. \tag{37}
\]

Therefore,
\[
\begin{aligned}
H(X^{i+1}, X^i) &= f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}\big(g(X^i) + \langle \partial g(X^i), X^{i+1} - X^i \rangle\big) \\
&\ge f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}g(X^{i+1}) = F(X^{i+1}).
\end{aligned} \tag{38}
\]

Consequently,
\[
F(X^{i+1}) \le H(X^{i+1}, X^i) \le H(X^i, X^i) = F(X^i). \tag{39}
\]
We immediately obtain the conclusion as desired.

Next we show the convergence of the iterative sequence generated by the stepwise linear approximative algorithm when solving (23). Theorem 4 establishes convergence when solving (23) with $\Gamma \cap \Omega = R^{m\times n}$.

Theorem 4. Let $\Upsilon$ be the set of stable points of problem (23) with no constraints; then the iterative sequence $\{X^i\}$ generated by the stepwise linear approximative algorithm converges to some $X^{\mathrm{opt}} \in \Upsilon$ as $\rho \to \infty$.

Proof. Assume that $X^{\mathrm{opt}} \in \Upsilon$ is any optimal solution of problem (23) with $\Gamma \cap \Omega = R^{m\times n}$. According to the first-order optimality condition, we have
\[
0 = \partial f(X^{\mathrm{opt}}) + \rho X^{\mathrm{opt}} - \rho U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}}. \tag{40}
\]

Since $X^{i+1} \in \arg\min\{f(X) + (\rho/2)\|X\|_F^2 - (\rho/2)(g(X^i) + \langle \partial g(X^i), X - X^i \rangle)\}$, we have
\[
0 = \partial f(X^{i+1}) + \rho X^{i+1} - \rho U^i W (U^i)^T X^i. \tag{41}
\]

Subtracting (40) from (41) gives
\[
X^{i+1} - X^{\mathrm{opt}} = \frac{1}{\rho}\big(\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\big) + U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i, \tag{42}
\]
in which $U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i = U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T$.

By Lemma 2, we have
\[
\big\|U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i\big\|_F
= \big\|U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T\big\|_F
\le \|X^i - X^{\mathrm{opt}}\|_F. \tag{43}
\]

Taking norms on both sides of (42) and using the triangle inequality, we have
\[
\begin{aligned}
\|X^{i+1} - X^{\mathrm{opt}}\|_F &= \lim_{\rho\to\infty}\|X^{i+1} - X^{\mathrm{opt}}\|_F \\
&\le \lim_{\rho\to\infty}\Big(\frac{1}{\rho}\big\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\big\|_F + \|X^i - X^{\mathrm{opt}}\|_F\Big) \\
&= \lim_{\rho\to\infty}\frac{1}{\rho}\big\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\big\|_F + \lim_{\rho\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = \|X^i - X^{\mathrm{opt}}\|_F,
\end{aligned} \tag{44}
\]

which implies that the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing as $\rho \to \infty$. We then observe that $\|X^i - X^{\mathrm{opt}}\|_F$ is bounded, and hence $\lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F$ exists, denoted by $\|\overline{X} - X^{\mathrm{opt}}\|_F$, where $\overline{X}$ is the limit point of $\{X^i\}$. Letting $i \to \infty$ in (41), by continuity we get


$0 = \partial f(\overline{X}) + \rho\overline{X} - \rho\overline{U} W \overline{U}^T \overline{X}$, which implies that $\overline{X} \in \Upsilon$. By setting $\overline{X} = X^{\mathrm{opt}}$, we have
\[
\lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = \|\overline{X} - X^{\mathrm{opt}}\|_F = 0, \tag{45}
\]
which completes the proof.

We are now ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set $\Gamma \cap \Omega$.

Theorem 5. Let $\xi$ be the set of optimal solutions of problem (23); then the iterative sequence $\{Y^i\}$ generated by the stepwise linear approximative algorithm converges to some $Y^{\mathrm{opt}} \in \xi$ as $\rho \to \infty$.

Proof. Generally speaking, subproblem (26) can be solved by projecting the solution of the unconstrained problem onto the closed convex set, that is, $Y^{i+1} = P_{\Gamma\cap\Omega}(X^{i+1})$, where $X^{i+1} \in \arg\min H(X, X^i)$. According to Theorem 4, the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing and convergent as $\rho \to \infty$; together with the well-known fact that $P$ is a nonexpansive operator, we have
\[
\|Y^i - Y^{\mathrm{opt}}\|_F = \big\|P_{\Gamma\cap\Omega}(X^i) - P_{\Gamma\cap\Omega}(X^{\mathrm{opt}})\big\|_F \le \|X^i - X^{\mathrm{opt}}\|_F, \tag{46}
\]
where $Y^{\mathrm{opt}}$ is the projection of $X^{\mathrm{opt}}$ onto $\Gamma \cap \Omega$. Clearly, $Y^{\mathrm{opt}} \in \xi$. Hence, $0 \le \lim_{i\to\infty}\|Y^i - Y^{\mathrm{opt}}\|_F \le \lim_{i\to\infty}\|X^i - X^{\mathrm{opt}}\|_F = 0$, which implies that $\{Y^i\}$ converges to $Y^{\mathrm{opt}}$. We immediately obtain the conclusion.

3. Numerical Results

In this section, we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes used in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection, we apply our method to one case of affine rank minimization problems, which can be formulated as
\[
\begin{aligned}
\min\quad & \frac{1}{2}\|\mathcal{A}(X) - b\|_2^2 \\
\text{s.t.}\quad & \operatorname{rank}(X) \le k, \\
& X \in R^{m\times n},
\end{aligned} \tag{47}
\]
where $X \in R^{m\times n}$ is the decision variable, and the linear map $\mathcal{A}: R^{m\times n} \to R^p$ and the vector $b \in R^p$ are known. Clearly, (47) is a special case of problem (3); hence our method can be suitably applied to it.

Recht et al. [39] have demonstrated that linear maps sampled from certain classes of probability distributions obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric: first, it must be isometric in expectation; second, the probability of large distortions of length must be exponentially small. In our numerical experiments, we use four nearly isometric measurement ensembles. The first is the ensemble with independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Bernoulli. The third has zeros in two-thirds of the entries and the remaining entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Bernoulli. The last one has an orthonormal basis, which we call Random Projection.

Figure 1: Performance curves under four different measurement matrices (Gauss, Binary, Sparse Binary, Random Projection): SNR versus the number of linear constraints.

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix $X$, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is $46 \times 81 = 3726$). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Bernoulli i.i.d., Sparse Bernoulli i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo $X$ to be recovered if $\|X - X^{\mathrm{opt}}\|_F/\|X\|_F \le 10^{-3}$, that is, $\mathrm{SNR} \ge 60$ dB, where $\mathrm{SNR} = -20\log(\|X - X^{\mathrm{opt}}\|_F/\|X\|_F)$. Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the curves of SNR under the four different measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the $x$-axis represent the number of linear constraints $p$, that is, the number of measurements, while those on the $y$-axis represent the signal-to-noise ratio between the true original $X$ and the $X^{\mathrm{opt}}$ recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices (Gauss, Binary, Sparse Binary, Random Projection): SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints, and the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than $2k(m+n-k)$. By comparison, Figure 2 shows the results of the nuclear norm heuristic method in [39], a convex relaxation, for which the sharp transition to perfect recovery occurs near 1200 measurements, approximately equal to $2k(m+n-k)$. The comparison shows that our method performs better: it can recover an image from fewer constraints, that is, it needs fewer samples. In addition, the SNR of the nuclear norm heuristic method reaches 120 dB when the number of measurements is approximately 1500, while our method needs only 1200 measurements, a decrease of 20% from 1500. With 1500 measurements, the accuracy of our method is higher still; among the precisions of the four measurement matrices, the lowest is that of Random Projection, yet even that value exceeds 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500 constraints, we find that ours rises smoothly compared with the curve on the same interval in Figure 2, which shows that our method is stable for reconstructing images.
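The recovery criterion used above (relative error at most $10^{-3}$, i.e., SNR at least 60 dB) can be sketched as follows (NumPy, illustrative; the helper names `snr_db` and `recovered` are ours):

```python
import numpy as np

def snr_db(X_true, X_rec):
    # SNR = -20 * log10(||X_true - X_rec||_F / ||X_true||_F)
    rel = np.linalg.norm(X_true - X_rec, "fro") / np.linalg.norm(X_true, "fro")
    return -20.0 * np.log10(rel)

def recovered(X_true, X_rec):
    # declared recovered if relative error <= 1e-3, i.e. SNR >= 60 dB
    return snr_db(X_true, X_rec) >= 60.0

X = np.ones((46, 81))                  # stand-in for the 46 x 81 MIT logo
assert recovered(X, X + 1e-5)          # relative error 1e-5 -> SNR = 100 dB
assert not recovered(X, X + 1e-2)      # relative error 1e-2 -> SNR = 40 dB
```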

Figure 3 shows the recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely sensitive to $p$ compared to SeDuMi; the reason is that closed-form solutions can be obtained when solving this problem. The comparisons are shown in Table 1.

Table 1: DC versus SeDuMi in time (seconds).

| $p$ | 800 | 900 | 1000 | 1100 | 1200 | 1500 |
|---|---|---|---|---|---|---|
| DC | 4.682 | 5.049 | 5.078 | 5.138 | 5.196 | 5.409 |
| SeDuMi | 9.326 | 12.016 | 13.824 | 15.944 | 16.176 | 24.538 |

3.2. Matrix Completion Problems. In this subsection, we apply our method to matrix completion problems, which can be reformulated as
\[
\begin{aligned}
\min\quad & \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 \\
\text{s.t.}\quad & \operatorname{rank}(X) \le k, \\
& X \in R^{m\times n},
\end{aligned} \tag{48}
\]
where $X$ and $M$ are both $m\times n$ matrices and $\Omega$ is a subset of index pairs $(i, j)$. Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with $f(X) = (1/2)\|P_\Omega(X) - P_\Omega(M)\|_F^2$.
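Since $P_\Omega$ simply zeroes the entries outside the sample set, $f$ is cheap to evaluate; a small sketch (NumPy, illustrative; the helper names are ours):

```python
import numpy as np

def P_Omega(X, mask):
    # projection onto observed entries: keep (i, j) in Omega, zero elsewhere
    return X * mask

def f(X, M, mask):
    # f(X) = (1/2) * || P_Omega(X) - P_Omega(M) ||_F^2
    return 0.5 * np.linalg.norm(P_Omega(X - M, mask), "fro") ** 2

rng = np.random.default_rng(3)
m, n, k = 6, 5, 2
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k target
mask = rng.random((m, n)) < 0.5                                # Omega, SR about 0.5
assert f(M, M, mask) == 0.0            # exact completion has zero residual
assert f(np.zeros((m, n)), M, mask) > 0.0
```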

In our experiments, we first created random matrices $M_L \in R^{m\times k}$ and $M_R \in R^{m\times k}$ with i.i.d. Gaussian entries and then set $M = M_L M_R^T$. We sampled a subset $\Omega$ of $p$ entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC and FPCA [10], and the OptSpace (OPT) method [29], using
\[
\frac{\|P_\Omega(X) - P_\Omega(M)\|_F}{\|P_\Omega(M)\|_F} \le 10^{-3} \tag{49}
\]
as a stopping criterion. The maximum number of iterations is no more than 1500 for all methods; other parameters are set to their defaults. We report results averaged over 10 runs.

We use $\mathrm{SR} = p/(mn)$ to denote the sampling ratio and $\mathrm{FR} = k(m+n-k)/p$ to represent the dimension of the set of rank-$k$ matrices divided by the number of measurements. NS and AT denote the number of matrices that are recovered successfully and the average time (in seconds) over the successful runs, respectively, and RE represents the average relative error of the successfully recovered matrices. We use the relative error
\[
\frac{\|M - X^*\|_F}{\|M\|_F} \tag{50}
\]
to estimate the closeness of $X^*$ to $M$, where $X^*$ is the optimal solution to (48) produced by our method. We declared $M$ to be recovered if the relative error was less than $10^{-3}$.

Table 2 shows that our method performs close to OptSpace in the NS column, and both outperform SVT, FPC, and FPCA. Moreover, SVT, FPC, and FPCA cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. This does not mean that those algorithms are invalid, but rather that the maximum


Table 2: Numerical results for problems with $m = n = 40$ and SR = 0.5.

| $k$ | FR | Alg | NS | AT | RE |
|-----|--------|----------|----|------|----------|
| 1 | 0.0988 | DC | 10 | 0.02 | 1.23E-04 |
| | | SVT | 10 | 0.11 | 1.59E-04 |
| | | FPC | 9 | 0.13 | 6.52E-04 |
| | | FPCA | 10 | 0.03 | 3.98E-05 |
| | | OptSpace | 10 | 0.01 | 1.31E-04 |
| 2 | 0.1950 | DC | 10 | 0.03 | 1.50E-04 |
| | | SVT | 9 | 0.18 | 1.83E-04 |
| | | FPC | 8 | 0.14 | 6.52E-04 |
| | | FPCA | 10 | 0.02 | 5.83E-05 |
| | | OptSpace | 10 | 0.01 | 1.51E-04 |
| 3 | 0.2888 | DC | 10 | 0.05 | 2.68E-04 |
| | | SVT | 2 | 0.36 | 2.47E-04 |
| | | FPC | 5 | 0.25 | 8.58E-04 |
| | | FPCA | 10 | 0.06 | 1.38E-04 |
| | | OptSpace | 10 | 0.04 | 1.75E-04 |
| 4 | 0.3800 | DC | 10 | 0.06 | 2.84E-04 |
| | | SVT | 1 | 0.62 | 1.38E-04 |
| | | FPC | 1 | 0.25 | 8.72E-04 |
| | | FPCA | 0 | — | — |
| | | OptSpace | 10 | 0.05 | 2.68E-04 |
| 5 | 0.4688 | DC | 10 | 0.08 | 3.82E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 10 | 0.09 | 3.16E-04 |
| 6 | 0.5550 | DC | 9 | 0.12 | 4.64E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 10 | 0.21 | 4.05E-04 |
| 7 | 0.6388 | DC | 9 | 0.17 | 6.11E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 9 | 0.37 | 5.37E-04 |
| 8 | 0.7200 | DC | 7 | 0.28 | 7.90E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 9 | 0.82 | 6.68E-04 |
| 9 | 0.7987 | DC | 4 | 0.40 | 8.93E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 4 | 1.48 | 9.28E-04 |

Table 3: Numerical results for problems with $m = n = 100$ and SR = 0.2.

| $k$ | FR | Alg | NS | AT | RE |
|-----|--------|----------|----|------|----------|
| 1 | 0.0995 | DC | 10 | 0.30 | 1.91E-04 |
| | | SVT | 7 | 1.82 | 2.04E-04 |
| | | FPC | 10 | 3.14 | 8.06E-04 |
| | | FPCA | 10 | 0.12 | 1.73E-04 |
| | | OptSpace | 10 | 0.03 | 1.71E-04 |
| 2 | 0.1980 | DC | 10 | 0.77 | 3.34E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 8 | 0.21 | 2.97E-04 |
| | | OptSpace | 10 | 0.05 | 2.27E-04 |
| 3 | 0.2955 | DC | 10 | 0.64 | 3.03E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 6 | 0.09 | 2.20E-04 |
| | | OptSpace | 10 | 0.12 | 3.76E-04 |
| 4 | 0.3920 | DC | 9 | 1.16 | 4.40E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 10 | 0.22 | 3.85E-04 |
| 5 | 0.4875 | DC | 10 | 1.50 | 5.05E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 10 | 0.58 | 5.57E-04 |
| 6 | 0.5820 | DC | 6 | 2.82 | 7.45E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 9 | 0.97 | 6.57E-04 |
| 7 | 0.6755 | DC | 5 | 3.24 | 7.66E-04 |
| | | SVT | 0 | — | — |
| | | FPC | 0 | — | — |
| | | FPCA | 0 | — | — |
| | | OptSpace | 5 | 1.89 | 7.95E-04 |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original and the results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: $p = 800$, $p = 900$, and $p = 1000$.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg        NS   AT    RE          NS   AT    RA
k = 1, FR = 0.0663 / k = 7, FR = 0.4530
DC         10   0.19  1.49E-04    10   0.61  3.64E-04
SVT        10   0.68  1.66E-04     0   —     —
FPC        10   1.33  4.47E-04     0   —     —
FPCA       10   0.17  6.12E-05     0   —     —
OptSpace   10   0.05  1.29E-04    10   0.47  3.39E-04
k = 2, FR = 0.1320 / k = 8, FR = 0.5120
DC         10   0.20  1.69E-04    10   0.87  4.70E-04
SVT         8   1.39  1.66E-04     0   —     —
FPC        10   2.18  4.84E-04     0   —     —
FPCA       10   0.16  7.68E-05     0   —     —
OptSpace   10   0.04  1.48E-04    10   0.63  3.67E-04
k = 3, FR = 0.1970 / k = 9, FR = 0.5730
DC         10   0.25  1.92E-04     9   1.03  4.85E-04
SVT         4   1.77  1.23E-04     0   —     —
FPC         9   4.32  5.82E-04     0   —     —
FPCA        9   0.19  1.21E-04     0   —     —
OptSpace   10   0.07  1.98E-04    10   1.40  4.87E-04
k = 4, FR = 0.2613 / k = 10, FR = 0.6333
DC         10   0.32  2.30E-04     8   1.36  6.21E-04
SVT         4   9.82  4.67E-04     0   —     —
FPC         7   5.02  6.59E-04     0   —     —
FPCA       10   0.18  6.63E-04     0   —     —
OptSpace   10   0.09  2.08E-04    10   1.80  5.97E-04
k = 5, FR = 0.3250 / k = 11, FR = 0.6930
DC         10   0.37  2.72E-04     9   2.19  7.29E-04
SVT         0   —     —            0   —     —
FPC         1   8.18  7.34E-04     0   —     —
FPCA       10   0.10  2.20E-04     9   —     —
OptSpace   10   0.19  2.67E-04     5   4.11  7.05E-04
k = 6, FR = 0.3880 / k = 12, FR = 0.7520
DC         10   0.47  3.00E-04     6   2.83  8.79E-04
SVT         0   —     —            0   —     —
FPC         0   —     —            0   —     —
FPCA        0   0.09  2.20E-04     0   —     —
OptSpace   10   0.25  3.25E-04     6   6.44  8.47E-04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are minimal. When the dimension of the matrix becomes larger, the DC method has a better performance of NS and relative errors similar to those of OptSpace. In terms of time, ours is longer than that of OptSpace but shorter than those of FPC and SVT. Tables 3 and 4 show the above results. In conclusion, our method performs better on "hard" problems, that is, FR > 0.34, where the problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen by the default of each compared method. We observe that our method can recover problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, results under our method, and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n)   k   p       SR    FR    Alg       NS   AT      RA
100    10  5666    0.57  0.34  DC        10   0.14    1.91E-04
                               SVT       10   2.81    1.85E-04
                               FPC       10   5.23    1.83E-04
                               FPCA      10   0.28    4.94E-05
                               OptSpace  10   0.9     1.41E-04
200    10  15665   0.39  0.25  DC        10   0.80    1.65E-04
                               SVT       10   2.93    2.53E-04
                               FPC       10   4.18    2.73E-04
                               FPCA      10   0.08    6.88E-05
                               OptSpace  10   0.41    1.72E-04
300    10  27160   0.30  0.22  DC        10   2.72    1.67E-04
                               SVT       10   2.57    1.79E-04
                               FPC       10   9.8     1.63E-04
                               FPCA      10   0.39    6.71E-05
                               OptSpace  10   1.85    1.50E-04
400    10  41269   0.26  0.19  DC        10   5.73    1.56E-04
                               SVT       10   2.78    1.66E-04
                               FPC       10   13.28   1.30E-04
                               FPCA      10   0.73    6.65E-05
                               OptSpace  10   3.47    1.46E-04
500    10  49471   0.20  0.2   DC        10   13.63   1.73E-04
                               SVT       10   2.94    1.74E-04
                               FPC       10   25.75   1.44E-04
                               FPCA      10   1.12    1.07E-04
                               OptSpace  10   5.05    1.60E-04
1000   10  119406  0.12  0.17  DC        10   356.41  1.70E-04
                               SVT       10   4.56    1.66E-04
                               FPC       10   110.53  1.29E-04
                               FPCA      10   5.16    1.64E-04
                               OptSpace  10   14.61   1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we apply our method to max-cut problems.
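The SR and FR columns of Table 5 follow the usual matrix-completion conventions (an assumption on our part, since their definitions appear earlier in the paper): SR = p/(mn) is the sampling ratio and FR = k(m + n - k)/p is the ratio of degrees of freedom to measurements. Under that assumption the table's entries are reproduced by this small sketch:

```python
# Sanity check of the SR and FR columns in Table 5, assuming
# SR = p / (m * n) and FR = k * (m + n - k) / p.
def sampling_ratio(m, n, p):
    return p / (m * n)

def freedom_ratio(m, n, k, p):
    return k * (m + n - k) / p

# (m, n, k, p, SR, FR) rows taken from Table 5
rows = [
    (100, 100, 10, 5666, 0.57, 0.34),
    (200, 200, 10, 15665, 0.39, 0.25),
    (300, 300, 10, 27160, 0.30, 0.22),
]

for m, n, k, p, sr, fr in rows:
    assert round(sampling_ratio(m, n, p), 2) == sr
    assert round(freedom_ratio(m, n, k, p), 2) == fr
```

That all rows check out supports reading the table's FR values as degrees-of-freedom ratios.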

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

$$\min\ \operatorname{Tr}(LX) \quad \text{s.t.}\ \operatorname{diag}(X) = e,\ \operatorname{rank}(X) = 1,\ X \succeq 0, \quad (51)$$

where $X = xx^T$, $x = (x_1, \ldots, x_n)^T$, $L = \frac{1}{4}(\operatorname{diag}(We) - W)$, $e = (1, 1, \ldots, 1)^T$, and $W$ is the weight matrix. Equation (51) is one general case of problem (3) with $\Gamma \cap \Omega = \{X \mid \operatorname{diag}(X) = e,\ X \in S_+^n\}$. Hence our method can be suitably applied to (51).
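As a concrete illustration of the objects in (51) (our own sketch, not part of the paper's solver), the following builds L = (1/4)(diag(We) - W) for a small random weight matrix and checks that, for a cut vector x in {-1, +1}^n and X = xx^T, the constraints diag(X) = e and rank(X) = 1 hold and Tr(LX) equals the total weight of the edges crossing the cut:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.integers(0, 5, size=(n, n)).astype(float)
W = np.triu(W, 1) + np.triu(W, 1).T       # symmetric weights, zero diagonal

e = np.ones(n)
L = 0.25 * (np.diag(W @ e) - W)           # L = (1/4)(diag(We) - W)

x = rng.choice([-1.0, 1.0], size=n)       # a candidate cut
X = np.outer(x, x)                        # X = x x^T, PSD by construction

# Feasibility for (51): diag(X) = e and rank(X) = 1.
assert np.allclose(np.diag(X), e)
assert np.linalg.matrix_rank(X) == 1

# Tr(LX) equals the weight of the cut induced by x.
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])
assert np.isclose(np.trace(L @ X), cut)
```

This is the standard identity $x^T L x = \frac{1}{4}\sum_{i,j} w_{ij}(1 - x_i x_j)$, which counts each cut edge with its full weight.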

We now address the initialization and termination criteria for our method when applied to (51). We set $X^0 = ee^T$; in addition, we choose the initial penalty parameter $\rho$ to be 0.01 and set the parameter $\tau$ to be 1.1. And we use $\|X^{i+1} - X^i\|_F \le \varepsilon$ as the termination criterion with $\varepsilon = 10^{-5}$.

In our numerical experiments we first test several small-scale max-cut problems; then we perform our method on the standard "G" set problems written by Professor Ginaldi [40]. In all, we consider 10 problems ranging in size from 800 variables to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the corresponding dimension of the graph and the rank of the solution, respectively. The "obv" column denotes the objective values of the optimal matrix solutions obtained by our method.

In Table 6, "KV" denotes known optimal objective values and "UB" presents upper bounds for max-cut by semidefinite relaxation. It is easy to observe that our method is effective for solving these problems, and it has computed the global optimal solutions of all of them. Moreover, we obtain lower rank solutions compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds by the spectral bundle method proposed by Helmberg and Rendl [41]; "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]; "DC" denotes the results generated by our method; "DC+" is the result of DC with a neighborhood approximation algorithm; and the rightmost column, titled "BK", gives the best results known to us according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we can note that our optimal values are close to the results of the latest research and better than the values given by GW-cut, and they are all less than the corresponding upper bounds. For all the problems we obtain lower rank solutions compared with GW-cut; moreover, most solutions satisfy the constraint rank(X) = 1. The results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper we used the closed form of a penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph  m   KV    GW-cut           DC           Optimal solutions
                 UB       k       obv    k
C5     5   4     4.525    5       4      1     (-1, 1, -1, 1, -1)
K5     5   6     6.25     5       6      1     (-1, 1, -1, 1, -1)
AG     5   9.28  9.6040   5       9.28   1     (-1, 1, 1, -1, -1)
BG     12  88    90.3919  12      88     1     (-1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph  m     SUB    GW-cut        DC           DC+    BK
             obv    obv    k      obv    k
G01    800   12078  10612  14     11281  1     11518  11624
G02    800   12084  10617  31     11261  1     11550  11620
G03    800   12077  10610  14     11275  1     11540  11622
G11    800   627    551    30     494    2     528    564
G14    800   3187   2800   13     2806   1     2980   3063
G15    800   3169   2785   70     2767   1     2969   3050
G16    800   3172   2782   14     2800   1     2974   3052
G43    1000  7027   6174   —      6321   1     6519   6660
G45    1000  7020   6168   —      6348   1     6564   6654
G52    1000  4009   3520   —      3383   1     3713   3849

as DC programming, which can be solved by a stepwise linear approximative method. Under some suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors acknowledge the following fund projects: Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, 2010.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17-24, ACM, Corvallis, Ore, USA, June 2007.
[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.
[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734-4739, Arlington, Va, USA, June 2001.
[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11-22, 2000.
[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.
[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39-77, 2003.
[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161-187, 2003.
[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, 2010.
[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321-353, 2011.
[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110-7134, 2012.
[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327-2341, 2011.
[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291-305, 2012.
[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823-829, 2004.
[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327-337, 2011.
[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364-3375, 2011.
[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.
[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791-1798, San Francisco, Calif, USA, June 2010.
[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058-1076, 2010.
[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255-271, 2010.
[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1-5, IEEE, San Diego, Calif, USA, March 2010.
[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114-3117, Dallas, Tex, USA, March 2010.
[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1-5, IEEE, Cape Town, South Africa, May 2010.
[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957-2960, IEEE, Taipei, Taiwan, April 2009.
[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443-461, 2000.
[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531-558, 2015.
[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937-945, 2010.
[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584-587, 2009.
[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980-2998, 2010.
[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.
[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.
[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.
[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.
[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.
[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137-166, 2002.
[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.
[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.
[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.
[40] http://www.stanford.edu/~yyye/yyye/Gset
[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673-696, 2000.
[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115-1145, 1995.
[43] G. A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565-571, 2013.
[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29-35, 2004.



Note that $BA^T = (AB^T)^T$; thus they have the same nonzero singular values, and then $\varphi(BA^T) = \varphi(AB^T)$ holds. Together with (12) we have

$$\varphi\big((\alpha A + (1-\alpha)B)(\alpha A + (1-\alpha)B)^T\big) \le \alpha^2\,\varphi(AA^T) + 2\alpha(1-\alpha)\,\varphi(AB^T) + (1-\alpha)^2\,\varphi(BB^T). \quad (13)$$

It is known that for arbitrary matrices $A, B \in R^{m\times m}$,

$$\sigma_j(A^*B) \le \tfrac{1}{2}\,\sigma_j(A^*A + B^*B), \quad 1 \le j \le m \ \text{(see Theorem IX.4.2 in [36])}. \quad (14)$$

Further,

$$\sum_{j=1}^{k} \sigma_j(A^T B) \le \sum_{j=1}^{k} \tfrac{1}{2}\,\sigma_j(A^*A + B^*B), \quad 1 \le k \le n. \quad (15)$$

That is,

$$\varphi(AB^T) \le \tfrac{1}{2}\,\varphi(AA^T + BB^T), \quad (16)$$

where $AA^T$ and $BB^T$ are symmetric matrices. It is also known that for arbitrary Hermitian matrices $AA^T, BB^T \in R^{m\times m}$ and $k = 1, 2, \ldots, m$, the following holds (see Exercise II.1.15 in [24]):

$$\sum_{j=1}^{k} \sigma_j(AA^T + BB^T) \le \sum_{j=1}^{k} \sigma_j(AA^T) + \sum_{j=1}^{k} \sigma_j(BB^T). \quad (17)$$

Thus

$$\varphi(A^TA + B^TB) = \varphi(AA^T + BB^T) \le \varphi(AA^T) + \varphi(BB^T). \quad (18)$$

Together with (13) and (16), (18) yields

$$g(\alpha A + (1-\alpha)B) \le \alpha\,\varphi(AA^T) + (1-\alpha)\,\varphi(BB^T) = \alpha\,g(A) + (1-\alpha)\,g(B). \quad (19)$$

Hence $g(X) = \sum_{i=1}^{k}\sigma_i^2(X)$ is a convex function.
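The convexity of $g$ established above can be spot-checked numerically. The sketch below (our own, with hypothetical helper names) verifies the inequality $g(\alpha A + (1-\alpha)B) \le \alpha\,g(A) + (1-\alpha)\,g(B)$ on random matrices:

```python
import numpy as np

def g(X, k):
    # g(X) = sum of the squares of the k largest singular values of X
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[:k] ** 2))

rng = np.random.default_rng(1)
m, n, k = 8, 6, 3
for _ in range(100):
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, n))
    alpha = rng.uniform()
    lhs = g(alpha * A + (1 - alpha) * B, k)
    rhs = alpha * g(A, k) + (1 - alpha) * g(B, k)
    assert lhs <= rhs + 1e-9   # convexity inequality, as in (19)
```

An equivalent way to see the result: $g(X) = \max_P \|PX\|_F^2$ over rank-$k$ orthogonal projections $P$, a pointwise maximum of convex functions.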

(2) Let $\Theta = \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3, \ldots, \sigma_r)$, where $r$ is the rank of $X$ and $k \le r$, and let

$$S = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}. \quad (20)$$

Then we have

$$\operatorname{Tr}(X^T U W U^T X) = \operatorname{Tr}(V S^T U^T U W U^T U S V^T) = \operatorname{Tr}(V S^T W S V^T) = \operatorname{Tr}(S^T W S). \quad (21)$$

Moreover,

$$S^T W S = \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix}^{T} \begin{pmatrix} I_k & O \\ O & O \end{pmatrix} \begin{pmatrix} \Theta & O \\ O & O \end{pmatrix} = \begin{pmatrix} \widetilde{\Theta} & O \\ O & O \end{pmatrix}, \quad (22)$$

where $\widetilde{\Theta} = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots, \sigma_k^2)$.

Thus $\operatorname{Tr}(S^T W S) = \sum_{i=1}^{k}\sigma_i^2(X) = g(X)$ holds.

According to the above theorem, (10) can be rewritten as

$$\min\ f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^T U W U^T X) \quad \text{s.t.}\ X \in \Gamma \cap \Omega. \quad (23)$$

Since the objective function of (23) is a difference of convex functions and the constraint set $\Gamma \cap \Omega$ is a closed convex set, (23) is DC programming. Next we use a stepwise linear approximation method to solve it.

2.2. Stepwise Linear Approximative Algorithm. In this subsection we use the stepwise linear approximation method [37] to solve the reformulated model (23). Let $F(X)$ denote the objective function of (23), that is,

$$F(X) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\operatorname{Tr}(X^T U W U^T X), \quad (24)$$

and define the linear approximation of $F(X)$ as

$$H(X, X^i) = f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}\big[g(X^i) + \langle \partial g(X^i), X - X^i \rangle\big], \quad (25)$$

where $\partial g(X)/\partial X = \partial \operatorname{Tr}(X^T U W U^T X)/\partial X = 2\,UWU^T X$ (see 5.2.2 in [36]).

The stepwise linear approximative algorithm for solving model (23) can be described as follows.

Given the initial matrix $X^0 \in \Gamma \cap \Omega$, penalty parameter $\rho > 0$, rank $k > 0$, and $\tau > 1$, set $i = 0$.

Step 1. Compute the singular value decomposition of $X^i$: $U^i \Sigma^i (V^i)^T = X^i$.

Step 2. Update

$$X^{i+1} \in \arg\min_{X \in \Gamma \cap \Omega} H(X, X^i). \quad (26)$$

Step 3. If the stopping criterion is satisfied, stop; else set $i \leftarrow i + 1$ and $\rho \leftarrow (1 + \tau)\rho$, and go to Step 1.

Remark.

(1) Subproblem (26) can be handled by solving a series of convex subproblems. Efficient methods for convex programming are plentiful and have solid theoretical guarantees.

(a) When $\Gamma \cap \Omega$ is $R^{m\times n}$, closed-form solutions can be obtained; for example, when solving model (1), $X^{i+1} = (A^*A + \rho I)^{-1}(\rho\,U^i W (U^i)^T X^i)$ is obtained from the first-order optimality condition

$$0 \in \partial f(X) + \rho X - \rho\,U^i W (U^i)^T X^i. \quad (27)$$

(b) When $\Gamma \cap \Omega$ is a general convex set, a closed-form solution is hard to get, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). According to the particular closed convex constraint set, one can pick a suitable convex programming algorithm to combine with our method; the toolbox mentioned in this paper is just one possible choice.

(2) The stopping rule adopted here is $\|X^{i+1} - X^i\|_F \le \varepsilon$ for some sufficiently small $\varepsilon$.
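A runnable sketch of the algorithm for case (a), $\Gamma \cap \Omega = R^{m\times n}$, using the illustrative choice $f(X) = \frac{1}{2}\|X - M\|_F^2$ (the paper's $f$ is a general convex function; all names here are ours). With this $f$, condition (27) gives the closed-form update $X^{i+1} = (M + \rho\,U^i W (U^i)^T X^i)/(1+\rho)$, where $U^i W (U^i)^T X^i$ is the best rank-$k$ part of $X^i$:

```python
import numpy as np

def best_rank_k(X, k):
    # U W U^T X for W = diag(I_k, 0): the best rank-k approximation of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ (s[:k, None] * Vt[:k])

def stepwise_linear_approx(M, k, rho=0.01, tau=1.1, eps=1e-6, max_iter=1000):
    X = M.copy()
    for _ in range(max_iter):
        # Steps 1-2: closed-form minimizer of H(X, X^i) for f(X) = 0.5||X - M||_F^2
        Xn = (M + rho * best_rank_k(X, k)) / (1.0 + rho)
        # Step 3: stop when ||X^{i+1} - X^i||_F <= eps, else inflate rho
        if np.linalg.norm(Xn - X) <= eps:
            return Xn
        X, rho = Xn, (1.0 + tau) * rho
    return X

rng = np.random.default_rng(2)
M = rng.standard_normal((10, 12))
X = stepwise_linear_approx(M, k=3)
s = np.linalg.svd(X, compute_uv=False)
assert s[3] < 1e-4 * s[0]   # the iterate is driven toward the rank-3 set as rho grows
```

With a general affine $f$, as in model (1), the same update would instead come from solving the linear system in (27); the rank-$k$ factor and the penalty schedule are unchanged.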

2.3. Convergence Analysis. In this subsection we analyze the convergence properties of the sequence $\{X^i\}$ generated by the above algorithm. Before proving the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

Lemma 2. For any $Y_1, Y_2 \in R^{m\times n}$,

$$\|\widetilde{Y}_1 - \widetilde{Y}_2\|_F \le \|Y_1 - Y_2\|_F, \quad (28)$$

where $\widetilde{Y}_i$ is the approximation of $Y_i$ obtained by keeping its first $k$ singular values ($i = 1, 2$).

Proof. Without loss of generality, assume $m \le n$, and take the singular value decompositions $Y_1 = U_1 S_1 V_1^T$ and $Y_2 = U_2 S_2 V_2^T$ with

$$S_1 = \begin{pmatrix} \operatorname{diag}(\sigma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n}, \qquad S_2 = \begin{pmatrix} \operatorname{diag}(\gamma) & 0 \\ 0 & 0 \end{pmatrix} \in R^{m\times n}, \quad (29)$$

where $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_s)$, $\sigma_1 \ge \cdots \ge \sigma_s > 0$, and $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_t)$, $\gamma_1 \ge \cdots \ge \gamma_t > 0$. Then $\widetilde{Y}_i$ can be written as $\widetilde{Y}_i = U_i W S_i V_i^T$ ($i = 1, 2$). Since $U_1, V_1, U_2, V_2$ are orthogonal matrices, we have

$$\begin{aligned} \|Y_1 - Y_2\|_F^2 - \|\widetilde{Y}_1 - \widetilde{Y}_2\|_F^2 &= \operatorname{Tr}\big((Y_1 - Y_2)^T(Y_1 - Y_2)\big) - \operatorname{Tr}\big((\widetilde{Y}_1 - \widetilde{Y}_2)^T(\widetilde{Y}_1 - \widetilde{Y}_2)\big) \\ &= \operatorname{Tr}\big(Y_1^T Y_1 - \widetilde{Y}_1^T \widetilde{Y}_1 + Y_2^T Y_2 - \widetilde{Y}_2^T \widetilde{Y}_2\big) - 2\operatorname{Tr}\big(Y_1^T Y_2 - \widetilde{Y}_1^T \widetilde{Y}_2\big) \\ &= \sum_{i=1}^{s}\sigma_i^2 - \sum_{i=1}^{k}\sigma_i^2 + \sum_{i=1}^{t}\gamma_i^2 - \sum_{i=1}^{k}\gamma_i^2 - 2\operatorname{Tr}\big(Y_1^T Y_2 - \widetilde{Y}_1^T \widetilde{Y}_2\big). \end{aligned} \quad (30)$$

And we note that

$$\begin{aligned} \operatorname{Tr}\big(Y_1^T Y_2 - \widetilde{Y}_1^T \widetilde{Y}_2\big) &= \operatorname{Tr}\big((Y_1 - \widetilde{Y}_1)^T(Y_2 - \widetilde{Y}_2) + (Y_1 - \widetilde{Y}_1)^T \widetilde{Y}_2 + \widetilde{Y}_1^T(Y_2 - \widetilde{Y}_2)\big) \\ &= \operatorname{Tr}\big(V_1(S_1 - WS_1)^T U_1^T U_2 (S_2 - WS_2) V_2^T + V_1(S_1 - WS_1)^T U_1^T U_2 W S_2 V_2^T \\ &\qquad + V_1(WS_1)^T U_1^T U_2 (S_2 - WS_2) V_2^T\big) \\ &= \operatorname{Tr}\big((S_1 - WS_1)^T U (S_2 - WS_2) V^T + (S_1 - WS_1)^T U W S_2 V^T + (WS_1)^T U (S_2 - WS_2) V^T\big), \end{aligned} \quad (31)$$

where $U = U_1^T U_2$ and $V = V_1^T V_2$ are also orthogonal matrices.

Next we estimate an upper bound for $\operatorname{Tr}(Y_1^T Y_2 - \widetilde{Y}_1^T \widetilde{Y}_2)$. According to Theorem 7.4.9 in [38], an orthogonal matrix $U$ is a maximizing matrix for the problem $\max\{\operatorname{Tr}(AU) : U \text{ is orthogonal}\}$ if and only if $AU$ is a positive semidefinite matrix. Thus $\operatorname{Tr}((S_1 - WS_1)^T U (S_2 - WS_2) V^T)$, $\operatorname{Tr}((S_1 - WS_1)^T U W S_2 V^T)$, and $\operatorname{Tr}((WS_1)^T U (S_2 - WS_2) V^T)$ achieve their maxima if and only if $(S_1 - WS_1)^T U (S_2 - WS_2) V^T$, $(S_1 - WS_1)^T U W S_2 V^T$, and $(WS_1)^T U (S_2 - WS_2) V^T$ are all positive semidefinite. It is known that when $AB$ is positive semidefinite,

$$\operatorname{Tr}(AB) = \sum_i \sigma_i(AB) \le \sum_i \sigma_i(A)\,\sigma_i(B). \quad (32)$$

Applying (32) to the above three terms, we have

$$\begin{aligned} \operatorname{Tr}\big((S_1 - WS_1)^T U (S_2 - WS_2) V^T\big) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(S_2 - WS_2), \\ \operatorname{Tr}\big((S_1 - WS_1)^T U W S_2 V^T\big) &\le \sum_i \sigma_i(S_1 - WS_1)\,\sigma_i(WS_2), \\ \operatorname{Tr}\big((WS_1)^T U (S_2 - WS_2) V^T\big) &\le \sum_i \sigma_i(WS_1)\,\sigma_i(S_2 - WS_2). \end{aligned} \quad (33)$$

Hence, without loss of generality assuming $k \le s \le t$, we have

$$\|Y_1 - Y_2\|_F^2 - \|\widetilde{Y}_1 - \widetilde{Y}_2\|_F^2 \ge \sum_{i=1}^{s}\sigma_i^2 - \sum_{i=1}^{k}\sigma_i^2 + \sum_{i=1}^{t}\gamma_i^2 - \sum_{i=1}^{k}\gamma_i^2 - 2\sum_{i=k+1}^{s}\sigma_i\gamma_i = \sum_{i=k+1}^{s}(\sigma_i - \gamma_i)^2 + \sum_{i=s+1}^{t}\gamma_i^2 \ge 0, \quad (34)$$

which implies that $\|\widetilde{Y}_1 - \widetilde{Y}_2\|_F \le \|Y_1 - Y_2\|_F$ holds.
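Inequality (32) is an instance of von Neumann's trace inequality, which in fact holds for arbitrary square matrices in the form $|\operatorname{Tr}(AB)| \le \sum_i \sigma_i(A)\,\sigma_i(B)$. A quick numerical check (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    A = rng.standard_normal((6, 6))
    B = rng.standard_normal((6, 6))
    sa = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    sb = np.linalg.svd(B, compute_uv=False)
    # von Neumann trace inequality: |Tr(AB)| <= sum_i sigma_i(A) sigma_i(B)
    assert abs(np.trace(A @ B)) <= sa @ sb + 1e-9
```

For positive semidefinite $AB$, the trace equals the sum of the eigenvalues of $AB$, which recovers the form used in (32).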

Next we claim that the objective values of the iterates of the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let $\{X^i\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^i)\}$ is monotonically nonincreasing.

Proof. Since $X^{i+1} \in \arg\min\{H(X, X^i) \mid X \in \Gamma \cap \Omega\}$, we have

$$H(X^{i+1}, X^i) \le H(X^i, X^i) = f(X^i) + \frac{\rho}{2}\|X^i\|_F^2 - \frac{\rho}{2}g(X^i) = F(X^i), \quad (35)$$

and $\partial g(X)$ is the subdifferential of $g(X)$, which implies

$$\forall X, Y: \quad g(X) \ge g(Y) + \langle \partial g(Y), X - Y \rangle. \quad (36)$$

Letting $X = X^{i+1}$ and $Y = X^i$ in the above inequality, we have

$$g(X^{i+1}) \ge g(X^i) + \langle \partial g(X^i), X^{i+1} - X^i \rangle. \quad (37)$$

Therefore,

$$\begin{aligned} H(X^{i+1}, X^i) &= f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}\big[g(X^i) + \langle \partial g(X^i), X^{i+1} - X^i \rangle\big] \\ &\ge f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}g(X^{i+1}) = F(X^{i+1}). \end{aligned} \quad (38)$$

Consequently,

$$F(X^{i+1}) \le H(X^{i+1}, X^i) \le H(X^i, X^i) = F(X^i). \quad (39)$$

We immediately obtain the conclusion as desired.
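Theorem 3 is the standard majorization-minimization guarantee: $H(\cdot, X^i)$ majorizes $F$ and touches it at $X^i$. It can be verified numerically with the same illustrative choice $f(X) = \frac{1}{2}\|X - M\|_F^2$ as before (our assumption, not the paper's general $f$), for which the minimizer of $H(\cdot, X^i)$ over $R^{m\times n}$ is $(M + \rho\,T_k(X^i))/(1+\rho)$, with $T_k$ the best rank-$k$ approximation:

```python
import numpy as np

def g(X, k):
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[:k] ** 2))

def F(X, M, rho, k):
    # F(X) = f(X) + (rho/2)||X||_F^2 - (rho/2) g(X), with f(X) = 0.5||X - M||_F^2
    return (0.5 * np.linalg.norm(X - M) ** 2
            + 0.5 * rho * (np.linalg.norm(X) ** 2 - g(X, k)))

def minimize_H(Xi, M, rho, k):
    # exact minimizer of H(., Xi): (M + rho * best-rank-k part of Xi) / (1 + rho)
    U, s, Vt = np.linalg.svd(Xi, full_matrices=False)
    Tk = U[:, :k] @ (s[:k, None] * Vt[:k])
    return (M + rho * Tk) / (1.0 + rho)

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 8))
X, rho, k = rng.standard_normal((8, 8)), 0.5, 2
values = []
for _ in range(20):
    values.append(F(X, M, rho, k))
    X = minimize_H(X, M, rho, k)
values.append(F(X, M, rho, k))

# F(X^{i+1}) <= F(X^i) along the iterates, as Theorem 3 asserts
assert all(b <= a + 1e-9 for a, b in zip(values, values[1:]))
```

The descent follows exactly the chain (35)-(39): minimizing the majorizer $H$ cannot increase $F$.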

Next we show the convergence of iterative sequencegenerated by stepwise linear approximative algorithm whensolving (23)Theorem 4 shows the convergence when solving(23) with Γ cap Ω = 119877119898times119899

Theorem 4 Let Υ be the set of stable points of problem (23)with no constraints then the iterative sequence 119883119894 generatedby linear approximative algorithm converges to some119883119900119901119905 isin Υwhen 120588 rarr infin

Proof. Assume that $X_{\mathrm{opt}} \in \Upsilon$ is any optimal solution of problem (23) with $\Gamma \cap \Omega = R^{m \times n}$. By the first-order optimality conditions,
$$0 = \partial f(X_{\mathrm{opt}}) + \rho X_{\mathrm{opt}} - \rho U_{\mathrm{opt}} W (U_{\mathrm{opt}})^T X_{\mathrm{opt}}. \quad (40)$$

Since $X_{i+1} \in \arg\min\{ f(X) + (\rho/2)\|X\|_F^2 - (\rho/2)( g(X_i) + \langle \partial g(X_i), X - X_i \rangle ) \}$, we also have
$$0 = \partial f(X_{i+1}) + \rho X_{i+1} - \rho U_i W (U_i)^T X_i. \quad (41)$$

Subtracting (40) from (41) gives
$$X_{i+1} - X_{\mathrm{opt}} = -\frac{1}{\rho}\left( \partial f(X_{i+1}) - \partial f(X_{\mathrm{opt}}) \right) + U_i W (U_i)^T X_i - U_{\mathrm{opt}} W (U_{\mathrm{opt}})^T X_{\mathrm{opt}}, \quad (42)$$
in which $U_i W (U_i)^T X_i - U_{\mathrm{opt}} W (U_{\mathrm{opt}})^T X_{\mathrm{opt}} = U_i W \Sigma_i (V_i)^T - U_{\mathrm{opt}} W \Sigma_{\mathrm{opt}} (V_{\mathrm{opt}})^T$.

By Lemma 2 we have
$$\left\| U_i W (U_i)^T X_i - U_{\mathrm{opt}} W (U_{\mathrm{opt}})^T X_{\mathrm{opt}} \right\|_F = \left\| U_i W \Sigma_i (V_i)^T - U_{\mathrm{opt}} W \Sigma_{\mathrm{opt}} (V_{\mathrm{opt}})^T \right\|_F \le \left\| X_i - X_{\mathrm{opt}} \right\|_F. \quad (43)$$

Taking norms on both sides of (42) and using the triangle inequality, we have
$$
\begin{aligned}
\|X_{i+1} - X_{\mathrm{opt}}\|_F &= \lim_{\rho \to \infty} \|X_{i+1} - X_{\mathrm{opt}}\|_F \\
&\le \lim_{\rho \to \infty} \left( \frac{1}{\rho} \|\partial f(X_{i+1}) - \partial f(X_{\mathrm{opt}})\|_F + \|X_i - X_{\mathrm{opt}}\|_F \right) \\
&= \lim_{\rho \to \infty} \frac{1}{\rho} \|\partial f(X_{i+1}) - \partial f(X_{\mathrm{opt}})\|_F + \lim_{\rho \to \infty} \|X_i - X_{\mathrm{opt}}\|_F = \|X_i - X_{\mathrm{opt}}\|_F,
\end{aligned} \quad (44)
$$

which implies that the sequence $\{\|X_i - X_{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing as $\rho \to \infty$. Since it is also bounded, $\lim_{i \to \infty} \|X_i - X_{\mathrm{opt}}\|_F$ exists; denote it by $\|\bar{X} - X_{\mathrm{opt}}\|_F$, where $\bar{X}$ is the limit point of $\{X_i\}$. Letting $i \to \infty$ in (41), by continuity we get $0 = \partial f(\bar{X}) + \rho \bar{X} - \rho \bar{U} W \bar{U}^T \bar{X}$, which implies that $\bar{X} \in \Upsilon$. Setting $X_{\mathrm{opt}} = \bar{X}$, we have
$$\lim_{i \to \infty} \|X_i - X_{\mathrm{opt}}\|_F = \|\bar{X} - X_{\mathrm{opt}}\|_F = 0, \quad (45)$$
which completes the proof.

Mathematical Problems in Engineering 7

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set $\Gamma \cap \Omega$.

Theorem 5. Let $\xi$ be the set of optimal solutions of problem (23). Then the iterative sequence $\{Y_i\}$ generated by the stepwise linear approximative algorithm converges to some $Y_{\mathrm{opt}} \in \xi$ as $\rho \to \infty$.

Proof. Generally speaking, subproblem (26) can be solved by projecting the solution of the unconstrained problem onto the closed convex set, that is, $Y_{i+1} = P_{\Gamma \cap \Omega}(X_{i+1})$, where $X_{i+1} \in \arg\min H(X, X_i)$. According to Theorem 4, the sequence $\{\|X_i - X_{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing and convergent as $\rho \to \infty$. Together with the well-known fact that $P$ is a nonexpansive operator, we have
$$\|Y_i - Y_{\mathrm{opt}}\|_F = \|P_{\Gamma \cap \Omega}(X_i) - P_{\Gamma \cap \Omega}(X_{\mathrm{opt}})\|_F \le \|X_i - X_{\mathrm{opt}}\|_F, \quad (46)$$
where $Y_{\mathrm{opt}}$ is the projection of $X_{\mathrm{opt}}$ onto $\Gamma \cap \Omega$; clearly $Y_{\mathrm{opt}} \in \xi$. Hence $0 \le \lim_{i \to \infty} \|Y_i - Y_{\mathrm{opt}}\|_F \le \lim_{i \to \infty} \|X_i - X_{\mathrm{opt}}\|_F = 0$, which implies that $\{Y_i\}$ converges to $Y_{\mathrm{opt}}$. We immediately obtain the conclusion.
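The nonexpansiveness of the projection operator used in (46) can be checked directly for the positive semidefinite cone $S^n_+$ (which also appears in the feasible set of the max-cut problem (51)); its Euclidean projection has the closed form of clipping negative eigenvalues. A small NumPy check, illustrative only:

```python
import numpy as np

def proj_psd(X):
    # Euclidean projection onto the PSD cone: symmetrize, then clip
    # negative eigenvalues to zero
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n)); X = (X + X.T) / 2
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2

lhs = np.linalg.norm(proj_psd(X) - proj_psd(Y), "fro")
rhs = np.linalg.norm(X - Y, "fro")
assert lhs <= rhs + 1e-10  # the nonexpansiveness used in (46)
```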

3 Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and to max-cut problems. All codes are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to one case of affine rank minimization problems, which can be formulated as

$$\min \ \frac{1}{2}\|\mathcal{A}(X) - b\|_F^2 \quad \text{s.t.} \ \operatorname{rank}(X) \le k, \ X \in R^{m \times n}, \quad (47)$$
where $X \in R^{m \times n}$ is the decision variable and the linear map $\mathcal{A}: R^{m \times n} \to R^p$ and the vector $b \in R^p$ are known. Clearly, (47) is a special case of problem (3); hence our method can be suitably applied to it.

Recht et al. [39] have demonstrated that linear maps sampled from a suitable class of probability distributions obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric: first, it must be isometric in expectation; second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first has independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Binary. The third has zeros in two-thirds of the entries and the remaining entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Binary. The last has orthonormal rows, which we call Random Projection.

Figure 1: Performance curves under four different measurement matrices (Gauss, Binary, Sparse Binary, Random Projection): SNR (dB) versus the number of linear constraints.
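The paper's experiments are in MATLAB; the following NumPy sketch (illustrative only, with assumed normalizations chosen so that each ensemble is isometric in expectation) generates the four measurement ensembles and checks near-isometry on a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 1000, 46 * 81  # measurements, ambient dimension (vectorized 46x81 image)

def gauss(p, d):
    # i.i.d. N(0, 1/p) entries
    return rng.standard_normal((p, d)) / np.sqrt(p)

def binary(p, d):
    # i.i.d. symmetric +-1/sqrt(p) entries
    return rng.choice([-1.0, 1.0], size=(p, d)) / np.sqrt(p)

def sparse_binary(p, d):
    # zeros with probability 2/3, +-sqrt(3/p) otherwise
    return rng.choice([-1.0, 0.0, 0.0, 0.0, 0.0, 1.0], size=(p, d)) * np.sqrt(3.0 / p)

def random_projection(p, d):
    # p orthonormal rows, rescaled so that E||Ax||^2 = ||x||^2
    Q, _ = np.linalg.qr(rng.standard_normal((d, p)))
    return Q.T * np.sqrt(d / p)

x = rng.standard_normal(d)
for make in (gauss, binary, sparse_binary, random_projection):
    A = make(p, d)
    ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
    assert abs(ratio - 1.0) < 0.2  # nearly isometric on this vector
```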

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix $X$, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is $46 \times 81 = 3726$). Since the logo has only 5 distinct rows, it has rank 5. We sample it using Gauss, Binary, Sparse Binary, and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo $X$ to be recovered if $\|X - X_{\mathrm{opt}}\|_F / \|X\|_F \le 10^{-3}$, that is, $\mathrm{SNR} \ge 60$ dB, where $\mathrm{SNR} = -20\log(\|X - X_{\mathrm{opt}}\|_F / \|X\|_F)$. Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the curves of SNR under the four measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the $x$-axis represent the number of linear constraints $p$, that is, the number of measurements, while those on the $y$-axis represent the signal-to-noise ratio between the true original $X$ and the $X_{\mathrm{opt}}$ recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than $2k(m+n-k)$. In Figure 2, which shows the results of the nuclear norm heuristic method of [39] (a convex relaxation), the sharp transition to perfect recovery occurs near 1200 measurements, approximately equal to $2k(m+n-k)$. The comparison shows that our method has better performance: it can recover an image with fewer constraints, that is, it needs fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method reaches 120 dB only when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% relative to 1500. With 1500 measurements, the accuracy of our method would be higher still. Among the precisions of the four measurement matrices, the smallest is that of Random Projection; however, even this value is higher than 120 dB, which illustrates that our method is more accurate. Observing the curves between 1200 and 1500 constraints, we find that our curve rises smoothly compared with the curve on the same interval in Figure 2, which shows that our method is stable for reconstructing images.

Figure 3 shows recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely

Table 1: DC versus SeDuMi in time (seconds).

p        800    900     1000    1100    1200    1500
DC       46.82  50.49   50.78   51.38   51.96   54.09
SeDuMi   93.26  120.16  138.24  159.44  161.76  245.38

sensitive to $p$, compared with SeDuMi. The reason is that closed-form solutions can be obtained when we solve this problem. Comparisons are shown in Table 1.

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be formulated as

$$\min \ \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 \quad \text{s.t.} \ \operatorname{rank}(X) \le k, \ X \in R^{m \times n}, \quad (48)$$
where $X$ and $M$ are both $m \times n$ matrices and $\Omega$ is a subset of index pairs $(i, j)$. Numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with $f(X) = (1/2)\|P_\Omega(X) - P_\Omega(M)\|_F^2$.

In our experiments we first created random matrices $M_L \in R^{m \times k}$ and $M_R \in R^{m \times k}$ with i.i.d. Gaussian entries and then set $M = M_L M_R^T$. We sampled a subset $\Omega$ of $p$ entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC, FPCA [10], and the OptSpace (OPT) method [29], using
$$\frac{\|P_\Omega(X) - P_\Omega(M)\|_F}{\|P_\Omega(M)\|_F} \le 10^{-3} \quad (49)$$
as a stopping criterion. The maximum number of iterations is no more than 1500 in all methods; other parameters are set to their defaults. We report results averaged over 10 runs.

We use $\mathrm{SR} = p/(mn)$ to denote the sampling ratio and $\mathrm{FR} = k(m+n-k)/p$ to represent the dimension of the set of rank-$k$ matrices divided by the number of measurements. NS and AT denote the number of matrices that are recovered successfully and the average time (seconds) over the successfully solved instances, respectively; RE represents the average relative error of the successfully recovered matrices. We use the relative error
$$\frac{\|M - X^*\|_F}{\|M\|_F} \quad (50)$$
to estimate the closeness of $X^*$ to $M$, where $X^*$ is the optimal solution to (48) produced by our method. We declared $M$ to be recovered if the relative error was less than $10^{-3}$.
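As an illustration of this experimental setup (a NumPy sketch with assumed helper names; the actual experiments are in MATLAB), the following generates a rank-$k$ target $M = M_L M_R^T$, samples $\Omega$, and evaluates the quantities SR, FR, and the relative error defined above:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 40
k = 2
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k target

p = m * n // 2                      # SR = 0.5, as in Table 2
idx = rng.choice(m * n, size=p, replace=False)
mask = np.zeros(m * n, dtype=bool)
mask[idx] = True
mask = mask.reshape(m, n)           # Omega as a boolean mask

SR = p / (m * n)
FR = k * (m + n - k) / p            # FR > 0.34 marks the "hard" regime

def P_Omega(X):
    # projection onto the observed entries
    return np.where(mask, X, 0.0)

def rel_err(X):
    # relative error (50); recovery is declared when rel_err < 1e-3
    return np.linalg.norm(M - X, "fro") / np.linalg.norm(M, "fro")

assert SR == 0.5 and abs(FR - 0.1950) < 1e-12  # matches the k = 2 row of Table 2
assert np.count_nonzero(mask) == p
assert rel_err(M) < 1e-3            # the target itself passes the recovery test
```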

Table 2 shows that our method performs close to OptSpace in the NS column; both outperform SVT, FPC, and FPCA. However, SVT, FPC, and FPCA often cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. These results do not mean that those algorithms are invalid, but that the maximum


Table 2: Numerical results for problems with m = n = 100 and SR = 0.5. (Wait: m = n = 40.)

Alg       k  FR      NS  AT    RE          k  FR      NS  AT    RE
DC        1  0.0988  10  0.02  1.23E-04    6  0.5550   9  0.12  4.64E-04
SVT                  10  0.11  1.59E-04                 0  --    --
FPC                   9  0.13  6.52E-04                 0  --    --
FPCA                 10  0.03  3.98E-05                 0  --    --
OptSpace             10  0.01  1.31E-04                10  0.21  4.05E-04
DC        2  0.1950  10  0.03  1.50E-04    7  0.6388   9  0.17  6.11E-04
SVT                   9  0.18  1.83E-04                 0  --    --
FPC                   8  0.14  6.52E-04                 0  --    --
FPCA                 10  0.02  5.83E-05                 0  --    --
OptSpace             10  0.01  1.51E-04                 9  0.37  5.37E-04
DC        3  0.2888  10  0.05  2.68E-04    8  0.7200   7  0.28  7.90E-04
SVT                   2  0.36  2.47E-04                 0  --    --
FPC                   5  0.25  8.58E-04                 0  --    --
FPCA                 10  0.06  1.38E-04                 0  --    --
OptSpace             10  0.04  1.75E-04                 9  0.82  6.68E-04
DC        4  0.3800  10  0.06  2.84E-04    9  0.7987   4  0.40  8.93E-04
SVT                   1  0.62  1.38E-04                 0  --    --
FPC                   1  0.25  8.72E-04                 0  --    --
FPCA                  0  --    --                       0  --    --
OptSpace             10  0.05  2.68E-04                 4  1.48  9.28E-04
DC        5  0.4688  10  0.08  3.82E-04
SVT                   0  --    --
FPC                   0  --    --
FPCA                  0  --    --
OptSpace             10  0.09  3.16E-04

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg       k  FR      NS  AT    RE          k  FR      NS  AT    RE
DC        1  0.0995  10  0.30  1.91E-04    5  0.4875  10  1.50  5.05E-04
SVT                   7  1.82  2.04E-04                 0  --    --
FPC                  10  3.14  8.06E-04                 0  --    --
FPCA                 10  0.12  1.73E-04                 0  --    --
OptSpace             10  0.03  1.71E-04                10  0.58  5.57E-04
DC        2  0.1980  10  0.77  3.34E-04    6  0.5820   6  2.82  7.45E-04
SVT                   0  --    --                       0  --    --
FPC                   0  --    --                       0  --    --
FPCA                  8  0.21  2.97E-04                 0  --    --
OptSpace             10  0.05  2.27E-04                 9  0.97  6.57E-04
DC        3  0.2955  10  0.64  3.03E-04    7  0.6755   5  3.24  7.66E-04
SVT                   0  --    --                       0  --    --
FPC                   0  --    --                       0  --    --
FPCA                  6  0.09  2.20E-04                 0  --    --
OptSpace             10  0.12  3.76E-04                 5  1.89  7.95E-04
DC        4  0.3920   9  1.16  4.40E-04
SVT                   0  --    --
FPC                   0  --    --
FPCA                  0  --    --
OptSpace             10  0.22  3.85E-04


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original; results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg       k  FR      NS  AT    RE          k   FR      NS  AT    RE
DC        1  0.0663  10  0.19  1.49E-04    7   0.4530  10  0.61  3.64E-04
SVT                  10  0.68  1.66E-04                  0  --    --
FPC                  10  1.33  4.47E-04                  0  --    --
FPCA                 10  0.17  6.12E-05                  0  --    --
OptSpace             10  0.05  1.29E-04                 10  0.47  3.39E-04
DC        2  0.1320  10  0.20  1.69E-04    8   0.5120  10  0.87  4.70E-04
SVT                   8  1.39  1.66E-04                  0  --    --
FPC                  10  2.18  4.84E-04                  0  --    --
FPCA                 10  0.16  7.68E-05                  0  --    --
OptSpace             10  0.04  1.48E-04                 10  0.63  3.67E-04
DC        3  0.1970  10  0.25  1.92E-04    9   0.5730   9  1.03  4.85E-04
SVT                   4  1.77  1.23E-04                  0  --    --
FPC                   9  4.32  5.82E-04                  0  --    --
FPCA                  9  0.19  1.21E-04                  0  --    --
OptSpace             10  0.07  1.98E-04                 10  1.40  4.87E-04
DC        4  0.2613  10  0.32  2.30E-04    10  0.6333   8  1.36  6.21E-04
SVT                   4  9.82  4.67E-04                  0  --    --
FPC                   7  5.02  6.59E-04                  0  --    --
FPCA                 10  0.18  6.63E-04                  0  --    --
OptSpace             10  0.09  2.08E-04                 10  1.80  5.97E-04
DC        5  0.3250  10  0.37  2.72E-04    11  0.6930   9  2.19  7.29E-04
SVT                   0  --    --                        0  --    --
FPC                   1  8.18  7.34E-04                  0  --    --
FPCA                 10  0.10  2.20E-04                  9  --    --
OptSpace             10  0.19  2.67E-04                  5  4.11  7.05E-04
DC        6  0.3880  10  0.47  3.00E-04    12  0.7520   6  2.83  8.79E-04
SVT                   0  --    --                        0  --    --
FPC                   0  --    --                        0  --    --
FPCA                  0  0.09  2.20E-04                  0  --    --
OptSpace             10  0.25  3.25E-04                  6  6.44  8.47E-04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., $k \ge 5$), the average time of our algorithm is less than that of OptSpace, and in some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method retains good NS performance and relative errors similar to those of OptSpace. In terms of time, ours is longer than that of OptSpace but shorter than those of FPC and SVT. Tables 3 and 4 show these results. In conclusion, our method performs better on "hard" problems, that is, FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen by the default of each compared method. We observe that our method can recover problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original; results by our method; results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n)  k   p       SR    FR    Alg       NS  AT      RE
100   10  5666    0.57  0.34  DC        10  0.14    1.91E-04
                              SVT       10  2.81    1.85E-04
                              FPC       10  5.23    1.83E-04
                              FPCA      10  0.28    4.94E-05
                              OptSpace  10  0.90    1.41E-04
200   10  15665   0.39  0.25  DC        10  0.80    1.65E-04
                              SVT       10  2.93    2.53E-04
                              FPC       10  4.18    2.73E-04
                              FPCA      10  0.08    6.88E-05
                              OptSpace  10  0.41    1.72E-04
300   10  27160   0.30  0.22  DC        10  2.72    1.67E-04
                              SVT       10  2.57    1.79E-04
                              FPC       10  9.80    1.63E-04
                              FPCA      10  0.39    6.71E-05
                              OptSpace  10  1.85    1.50E-04
400   10  41269   0.26  0.19  DC        10  5.73    1.56E-04
                              SVT       10  2.78    1.66E-04
                              FPC       10  13.28   1.30E-04
                              FPCA      10  0.73    6.65E-05
                              OptSpace  10  3.47    1.46E-04
500   10  49471   0.20  0.20  DC        10  13.63   1.73E-04
                              SVT       10  2.94    1.74E-04
                              FPC       10  25.75   1.44E-04
                              FPCA      10  1.12    1.07E-04
                              OptSpace  10  5.05    1.60E-04
1000  10  119406  0.12  0.17  DC        10  356.41  1.70E-04
                              SVT       10  4.56    1.66E-04
                              FPC       10  110.53  1.29E-04
                              FPCA      10  5.16    1.64E-04
                              OptSpace  10  14.61   1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

$$\min \ \operatorname{Tr}(LX) \quad \text{s.t.} \ \operatorname{diag}(X) = e, \ \operatorname{rank}(X) = 1, \ X \succeq 0, \quad (51)$$
where $X = xx^T$, $x = (x_1, \ldots, x_n)^T$, $L = (1/4)(\operatorname{diag}(We) - W)$, $e = (1, 1, \ldots, 1)^T$, and $W$ is the weight matrix. Problem (51) is one general case of problem (3) with $\Gamma \cap \Omega = \{X \mid \operatorname{diag}(X) = e, X \in S^n_+\}$. Hence our method can be suitably applied to (51).
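For any $\pm 1$ cut vector $x$, the matrix $X = xx^T$ is feasible for the rank and diagonal constraints of (51), and $\operatorname{Tr}(LX) = x^T L x = (1/4)\sum_{i,j} w_{ij}(1 - x_i x_j)$ equals the total weight of edges crossing the cut. A NumPy check on a small random weighted graph (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.integers(0, 4, size=(n, n)).astype(float)
W = np.triu(W, 1)
W = W + W.T                                 # symmetric weights, zero diagonal

e = np.ones(n)
L = 0.25 * (np.diag(W @ e) - W)             # L = (1/4)(diag(We) - W)

x = np.array([-1.0, 1.0, -1.0, 1.0, -1.0])  # a +-1 cut vector
X = np.outer(x, x)                          # rank-1, diag(X) = e, X >= 0

# Tr(LX) equals the total weight of edges crossing the cut
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])
assert np.isclose(np.trace(L @ X), cut)
assert np.allclose(np.diag(X), e)
assert np.linalg.matrix_rank(X) == 1
```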

We now address the initialization and termination criteria for our method when applied to (51). We set $X^0 = ee^T$; in addition, we choose the initial penalty parameter $\rho$ to be 0.01 and set the parameter $\tau$ to be 1.1. We use $\|X_{i+1} - X_i\|_F \le \epsilon$ as the termination criterion, with $\epsilon = 10^{-5}$.

In our numerical experiments we first test several small-scale max-cut problems; then we run our method on the standard "G" set problems written by Professor Ginaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the dimension of the graph and the rank of the solution, respectively, and "obv" denotes the objective value of the optimal matrix solution produced by our method.

In Table 6, "KV" denotes known optimal objective values and "UB" presents upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for these problems: it computed the global optimal solutions of all of them, and it obtained lower rank solutions than the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds given by the spectral bundle method proposed by Helmberg and Rendl [41]; "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]; "DC" denotes the results generated by our method; "DC+" denotes the results of DC with a neighborhood approximation algorithm; and the rightmost column, titled "BK", gives the best known results according to [43, 44]. We use "--" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research and better than the values given by GW-cut, and they are all less than the corresponding upper bounds. For all problems we obtain lower rank solutions than GW-cut; moreover, most solutions satisfy the constraint rank = 1. These results show that our algorithm is effective for max-cut problems.

4 Concluding Remarks

In this paper we used the closed-form of a penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph  m   KV    UB(GW)   k(GW)  obv(DC)  k(DC)  Optimal solution
C5     5   4     4.525    5      4        1      (-1, 1, -1, 1, -1)
K5     5   6     6.25     5      6        1      (-1, 1, -1, 1, -1)
AG     5   9.28  9.6040   5      9.28     1      (-1, 1, 1, -1, -1)
BG     12  88    90.3919  12     88       1      (-1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph  m     SUB    GW obv  GW k  DC obv  DC k  DC+    BK
G01    800   12078  10612   14    11281   1     11518  11624
G02    800   12084  10617   31    11261   1     11550  11620
G03    800   12077  10610   14    11275   1     11540  11622
G11    800   627    551     30    494     2     528    564
G14    800   3187   2800    13    2806    1     2980   3063
G15    800   3169   2785    70    2767    1     2969   3050
G16    800   3172   2782    14    2800    1     2974   3052
G43    1000  7027   6174    --    6321    1     6519   6660
G45    1000  7020   6168    --    6348    1     6564   6654
G52    1000  4009   3520    --    3383    1     3713   3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a broad class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17-24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734-4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11-22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39-77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161-187, 2003.

[9] J. F. Cai, E. J. Candes, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321-353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110-7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327-2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291-305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823-829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327-337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364-3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791-1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058-1076, 2010.

[20] G. Taubock, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255-271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '02), pp. 1-5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Houssain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114-3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1-5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957-2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443-461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531-558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937-945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584-587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980-2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 512-267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137-166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673-696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115-1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565-571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29-35, 2004.


Mathematical Problems in Engineering 5

(a) When Γ ∩ Ω is R^{m×n}, closed-form solutions can be obtained; for example, when solving model (1), X^{i+1} = (A*A + ρI)^{−1}(ρU^i W(U^i)^T X^i) is obtained from the first-order optimality condition

  0 ∈ ∂f(X) + ρX − ρU^i W(U^i)^T X^i.  (27)

(b) When Γ ∩ Ω is a general convex set, the closed-form solution is hard to get, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). According to the particular closed convex constraint set, we can choose a suitable convex programming algorithm to combine with our method; the toolbox listed in this paper is just one possible choice.

(2) The stopping rule that we adopt here is ‖X^{i+1} − X^i‖_F ≤ ε with some ε small enough.
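The outer iteration described above can be sketched as follows. This is a minimal illustration under our own naming (not the authors' code): the subproblem solver is passed in as a black box (closed-form in case (a), a convex solver in case (b)), and the matrix W is the 0-1 diagonal selector used throughout the paper, so U^i W (U^i)^T X^i is the projection of X^i onto its top-k left singular subspace. The defaults ρ = 0.01 and τ = 1.1 are the values the paper reports using for max-cut experiments.

```python
import numpy as np

def top_k_projection(X, k):
    # U_i W (U_i)^T X_i with W = diag(1,...,1,0,...,0):
    # project X onto the span of its k leading left singular vectors
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ (U[:, :k].T @ X)

def stepwise_linear_approx(solve_subproblem, X0, k,
                           rho=0.01, tau=1.1, tol=1e-5, max_iter=500):
    # solve_subproblem(X_i, G_i, rho) returns a minimizer of the
    # linearized penalty model H(X, X_i), where G_i = U_i W (U_i)^T X_i
    X = X0
    for _ in range(max_iter):
        X_new = solve_subproblem(X, top_k_projection(X, k), rho)
        if np.linalg.norm(X_new - X, 'fro') <= tol:  # stop: ||X_{i+1}-X_i||_F <= eps
            return X_new
        X, rho = X_new, tau * rho  # gradually increase the penalty parameter
    return X
```

For instance, for the toy objective f(X) = (1/2)‖X − M‖_F² the unconstrained subproblem has the closed form X = (M + ρG)/(1 + ρ), which can be plugged in as `solve_subproblem`.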

2.3. Convergence Analysis. In this subsection we analyze the convergence properties of the sequence {X^i} generated by the above algorithm. Before we prove the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

Lemma 2. For any Y_1, Y_2 ∈ R^{m×n},

  ‖Ȳ_1 − Ȳ_2‖_F ≤ ‖Y_1 − Y_2‖_F,  (28)

where Ȳ_i is the pre-k singular value approximation of Y_i (i = 1, 2).
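Concretely, the pre-k singular value approximation can be formed by truncating the singular value decomposition; the following is a small sketch with our own function name, not code from the paper.

```python
import numpy as np

def rank_k_approx(Y, k):
    # keep the k largest singular values of Y, zero out the rest
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

By the Eckart–Young theorem, the Frobenius error of this truncation equals the norm of the discarded singular values.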

Proof. Without loss of generality, assume m ≤ n and take the singular value decompositions Y_1 = U_1 S_1 V_1^T and Y_2 = U_2 S_2 V_2^T with

  S_1 = ( diag(σ)  0 ; 0  0 ) ∈ R^{m×n},
  S_2 = ( diag(γ)  0 ; 0  0 ) ∈ R^{m×n},  (29)

where σ = (σ_1, σ_2, …, σ_s), σ_1 ≥ ⋯ ≥ σ_s > 0, and γ = (γ_1, γ_2, …, γ_t), γ_1 ≥ ⋯ ≥ γ_t > 0. Then Ȳ_i can be written as Ȳ_i = U_i W S_i V_i^T (i = 1, 2). Noting that U_1, V_1, U_2, V_2 are orthogonal matrices, we have

  ‖Y_1 − Y_2‖_F² − ‖Ȳ_1 − Ȳ_2‖_F²
    = Tr((Y_1 − Y_2)^T (Y_1 − Y_2)) − Tr((Ȳ_1 − Ȳ_2)^T (Ȳ_1 − Ȳ_2))
    = Tr(Y_1^T Y_1 − Ȳ_1^T Ȳ_1 + Y_2^T Y_2 − Ȳ_2^T Ȳ_2) − 2 Tr(Y_1^T Y_2 − Ȳ_1^T Ȳ_2)
    = Σ_{i=1}^{s} σ_i² − Σ_{i=1}^{k} σ_i² + Σ_{i=1}^{t} γ_i² − Σ_{i=1}^{k} γ_i² − 2 Tr(Y_1^T Y_2 − Ȳ_1^T Ȳ_2).  (30)

And we note that

  Tr(Y_1^T Y_2 − Ȳ_1^T Ȳ_2)
    = Tr((Y_1 − Ȳ_1)^T (Y_2 − Ȳ_2) + (Y_1 − Ȳ_1)^T Ȳ_2 + Ȳ_1^T (Y_2 − Ȳ_2))
    = Tr(V_1 (S_1 − W S_1)^T U_1^T U_2 (S_2 − W S_2) V_2^T
      + V_1 (S_1 − W S_1)^T U_1^T U_2 W S_2 V_2^T
      + V_1 (W S_1)^T U_1^T U_2 (S_2 − W S_2) V_2^T)
    = Tr((S_1 − W S_1)^T U (S_2 − W S_2) V^T
      + (S_1 − W S_1)^T U W S_2 V^T
      + (W S_1)^T U (S_2 − W S_2) V^T),  (31)

where U = U_1^T U_2 and V = V_1^T V_2 are also orthogonal matrices.

Next we estimate an upper bound for Tr(Y_1^T Y_2 − Ȳ_1^T Ȳ_2). According to Theorem 7.4.9 in [38], an orthogonal matrix U is a maximizing matrix for the problem max{Tr(AU) : U orthogonal} if and only if AU is positive semidefinite. Thus Tr((S_1 − W S_1)^T U (S_2 − W S_2) V^T), Tr((S_1 − W S_1)^T U W S_2 V^T), and Tr((W S_1)^T U (S_2 − W S_2) V^T) achieve their maxima if and only if (S_1 − W S_1)^T U (S_2 − W S_2) V^T, (S_1 − W S_1)^T U W S_2 V^T, and (W S_1)^T U (S_2 − W S_2) V^T are all positive semidefinite. It is known that when AB is positive semidefinite,

  Tr(AB) = Σ_i σ_i(AB) ≤ Σ_i σ_i(A) σ_i(B).  (32)

Applying (32) to the above three terms, we have

  Tr((S_1 − W S_1)^T U (S_2 − W S_2) V^T) ≤ Σ_i σ_i(S_1 − W S_1) σ_i(S_2 − W S_2),
  Tr((S_1 − W S_1)^T U W S_2 V^T) ≤ Σ_i σ_i(S_1 − W S_1) σ_i(W S_2),
  Tr((W S_1)^T U (S_2 − W S_2) V^T) ≤ Σ_i σ_i(W S_1) σ_i(S_2 − W S_2).  (33)


Hence, without loss of generality assuming k ≤ s ≤ t, we have

  ‖Y_1 − Y_2‖_F² − ‖Ȳ_1 − Ȳ_2‖_F²
    ≥ Σ_{i=1}^{s} σ_i² − Σ_{i=1}^{k} σ_i² + Σ_{i=1}^{t} γ_i² − Σ_{i=1}^{k} γ_i² − 2 Σ_{i=k+1}^{s} σ_i γ_i
    = Σ_{i=k+1}^{s} (σ_i − γ_i)² + Σ_{i=s+1}^{t} γ_i² ≥ 0,  (34)

which implies that ‖Ȳ_1 − Ȳ_2‖_F ≤ ‖Y_1 − Y_2‖_F holds.

Next we claim that the objective values of the iterates of the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let {X^i} be the iterative sequence generated by the stepwise linear approximative algorithm. Then {F(X^i)} is monotonically nonincreasing.

Proof. Since X^{i+1} ∈ argmin{H(X, X^i) | X ∈ Γ ∩ Ω}, we have

  H(X^{i+1}, X^i) ≤ H(X^i, X^i) = f(X^i) + (ρ/2)‖X^i‖_F² − (ρ/2) g(X^i) = F(X^i),  (35)

and ∂g(X) is the subdifferential of g(X), which implies

  g(X) ≥ g(Y) + ⟨∂g(Y), X − Y⟩  for all X, Y.  (36)

Letting X = X^{i+1} and Y = X^i in the above inequality, we have

  g(X^{i+1}) ≥ g(X^i) + ⟨∂g(X^i), X^{i+1} − X^i⟩.  (37)

Therefore

  H(X^{i+1}, X^i) = f(X^{i+1}) + (ρ/2)‖X^{i+1}‖_F² − (ρ/2)[g(X^i) + ⟨∂g(X^i), X^{i+1} − X^i⟩]
    ≥ f(X^{i+1}) + (ρ/2)‖X^{i+1}‖_F² − (ρ/2) g(X^{i+1})
    = F(X^{i+1}).  (38)

Consequently,

  F(X^{i+1}) ≤ H(X^{i+1}, X^i) ≤ H(X^i, X^i) = F(X^i).  (39)

We immediately obtain the conclusion as desired.

Next we show the convergence of the iterative sequence generated by the stepwise linear approximative algorithm when solving (23). Theorem 4 establishes convergence when solving (23) with Γ ∩ Ω = R^{m×n}.

Theorem 4. Let Υ be the set of stable points of problem (23) with no constraints; then the iterative sequence {X^i} generated by the stepwise linear approximative algorithm converges to some X^opt ∈ Υ as ρ → ∞.

Proof. Assume that X^opt ∈ Υ is any optimal solution of problem (23) with Γ ∩ Ω = R^{m×n}. According to the first-order optimality conditions, we have

  0 = ∂f(X^opt) + ρX^opt − ρU^opt W(U^opt)^T X^opt.  (40)

Since X^{i+1} ∈ argmin{f(X) + (ρ/2)‖X‖_F² − (ρ/2)[g(X^i) + ⟨∂g(X^i), X − X^i⟩]}, we have

  0 = ∂f(X^{i+1}) + ρX^{i+1} − ρU^i W(U^i)^T X^i.  (41)

Subtracting (40) from (41) gives

  X^{i+1} − X^opt = (1/ρ)(∂f(X^{i+1}) − ∂f(X^opt)) + U^opt W(U^opt)^T X^opt − U^i W(U^i)^T X^i,  (42)

in which U^opt W(U^opt)^T X^opt − U^i W(U^i)^T X^i = U^opt W Σ^opt (V^opt)^T − U^i W Σ^i (V^i)^T.

By Lemma 2, we have

  ‖U^opt W(U^opt)^T X^opt − U^i W(U^i)^T X^i‖_F
    = ‖U^opt W Σ^opt (V^opt)^T − U^i W Σ^i (V^i)^T‖_F
    ≤ ‖X^i − X^opt‖_F.  (43)

Taking norms on both sides of (42) and using the triangle inequality, we have

  lim_{ρ→∞} ‖X^{i+1} − X^opt‖_F
    ≤ lim_{ρ→∞} ((1/ρ)‖∂f(X^{i+1}) − ∂f(X^opt)‖_F + ‖X^i − X^opt‖_F)
    = lim_{ρ→∞} (1/ρ)‖∂f(X^{i+1}) − ∂f(X^opt)‖_F + lim_{ρ→∞} ‖X^i − X^opt‖_F
    = ‖X^i − X^opt‖_F,  (44)

which implies that the sequence {‖X^i − X^opt‖_F} is monotonically nonincreasing as ρ → ∞. We then observe that {‖X^i − X^opt‖_F} is bounded, and hence lim_{i→∞} ‖X^i − X^opt‖_F exists, denoted by ‖X̄ − X^opt‖_F, where X̄ is the limit point of {X^i}. Letting i → ∞ in (41), by continuity we get 0 = ∂f(X̄) + ρX̄ − ρŪWŪ^T X̄, which implies that X̄ ∈ Υ. By setting X̄ = X^opt, we have

  lim_{i→∞} ‖X^i − X^opt‖_F = ‖X̄ − X^opt‖_F = 0,  (45)

which completes the proof.

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set Γ ∩ Ω.

Theorem 5. Let ξ be the set of optimal solutions of problem (23); then the iterative sequence {Y^i} generated by the stepwise linear approximative algorithm converges to some Y^opt ∈ ξ as ρ → ∞.

Proof. Generally speaking, subproblem (26) can be solved by projecting a solution of the unconstrained problem onto the closed convex set; that is, Y^{i+1} = P_{Γ∩Ω}(X^{i+1}), where X^{i+1} ∈ argmin H(X, X^i). According to Theorem 4, the sequence {‖X^i − X^opt‖_F} is monotonically nonincreasing and convergent as ρ → ∞; together with the well-known fact that P is a nonexpansive operator, we have

  ‖Y^i − Y^opt‖_F = ‖P_{Γ∩Ω}(X^i) − P_{Γ∩Ω}(X^opt)‖_F ≤ ‖X^i − X^opt‖_F,  (46)

where Y^opt is the projection of X^opt onto Γ ∩ Ω. Clearly Y^opt ∈ ξ. Hence 0 ≤ lim_{i→∞} ‖Y^i − Y^opt‖_F ≤ lim_{i→∞} ‖X^i − X^opt‖_F = 0, which implies that {Y^i} converges to Y^opt. We immediately obtain the conclusion.

3. Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes used in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to one case of affine rank minimization problems, which can be formulated as

  min  (1/2)‖A(X) − b‖_F²
  s.t. rank(X) ≤ k,
       X ∈ R^{m×n},  (47)

where X ∈ R^{m×n} is the decision variable and the linear map A: R^{m×n} → R^p and the vector b ∈ R^p are known. Clearly (47) is a special case of problem (3); hence our method can be suitably applied to (47).

Recht et al. [39] have demonstrated that when we sample linear maps from a class of probability distributions, then they

Figure 1: Performance curves under four different measurement matrices: SNR versus the number of linear constraints.

would obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric. First, it must be isometric in expectation. Second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first one has independent identically distributed (i.i.d.) Gaussian entries. The second one has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Binary. The third one has zeros in two-thirds of the entries and the others sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Binary. The last one has an orthonormal basis, which we call Random Projection.
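The four ensembles can be sampled as follows. This is our own sketch (the function name and the normalizations are assumptions, chosen so that each map is nearly isometric in expectation when acting on vec(X)):

```python
import numpy as np

def measurement_matrix(kind, p, mn, rng=np.random.default_rng(0)):
    # each row acts on vec(X); p rows give p linear measurements
    if kind == "gauss":
        return rng.standard_normal((p, mn)) / np.sqrt(p)
    if kind == "binary":
        # i.i.d. symmetric Bernoulli (+/-1) entries
        return rng.choice([-1.0, 1.0], size=(p, mn)) / np.sqrt(p)
    if kind == "sparse_binary":
        # two-thirds zeros, the rest symmetric Bernoulli
        vals = rng.choice([-1.0, 0.0, 1.0], size=(p, mn), p=[1/6, 2/3, 1/6])
        return vals * np.sqrt(3.0 / p)
    if kind == "random_projection":
        # orthonormal rows via QR of a Gaussian matrix
        Q, _ = np.linalg.qr(rng.standard_normal((mn, p)))
        return Q[:, :p].T
    raise ValueError(kind)
```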

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix X, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is 46 × 81 = 3726). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Binary i.i.d., Sparse Binary i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo X to be recovered if ‖X − X^opt‖_F / ‖X‖_F ≤ 10^{−3}, that is, SNR ≥ 60 dB (SNR = −20 log(‖X − X^opt‖_F / ‖X‖_F)). Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the curves of SNR under the four different measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the x-axis represent the number of linear constraints p, that is, the number of measurements, while those on the y-axis represent the Signal to Noise Ratio between the true original X and the X^opt recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but has the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than 2k(m + n − k). Compare with Figure 2, which shows the results of the nuclear norm heuristic method in [39], a kind of convex relaxation: there the sharp transition to perfect recovery occurs near 1200 measurements, which is approximately equal to 2k(m + n − k). The comparison shows that our method can recover an image with fewer constraints; namely, it needs fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method achieves 120 dB when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% compared with 1500. If the number of measurements were 1500, the accuracy of our method would be higher still. Among the precisions of the four different measurement matrices, the smallest is that of Random Projection; however, its value is still higher than 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500, we find that our curve rises smoothly compared with the curve in the same interval in Figure 2; this shows that our method is stable for reconstructing the image.
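The recovery criterion used above is easy to state in code; the following helper (our own naming, mirroring the SNR formula in the text) declares recovery at SNR ≥ 60 dB, i.e., relative error at most 10⁻³.

```python
import numpy as np

def snr_db(X_true, X_rec):
    # SNR = -20 log10(||X - X_opt||_F / ||X||_F)
    rel = np.linalg.norm(X_true - X_rec) / np.linalg.norm(X_true)
    return -20.0 * np.log10(rel)

def is_recovered(X_true, X_rec):
    # recovery threshold from the text: SNR >= 60 dB
    return snr_db(X_true, X_rec) >= 60.0
```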

Figure 3 shows the recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely

Table 1: DC versus SeDuMi in time (seconds).

p      | 800   | 900    | 1000   | 1100   | 1200   | 1500
DC     | 4.682 | 5.049  | 5.078  | 5.138  | 5.196  | 5.409
SeDuMi | 9.326 | 12.016 | 13.824 | 15.944 | 16.176 | 24.538

sensitive to p compared to SeDuMi. The reason is that closed-form solutions can be obtained when we solve this problem. The comparisons are shown in Table 1.

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be reformulated as

  min  (1/2)‖P_Ω(X) − P_Ω(M)‖_F²
  s.t. rank(X) ≤ k,
       X ∈ R^{m×n},  (48)

where X and M are both m × n matrices and Ω is a subset of index pairs (i, j). Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with f(X) = (1/2)‖P_Ω(X) − P_Ω(M)‖_F².
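The sampling operator P_Ω and the objective in (48) can be sketched as follows; these are our own minimal helpers (not the authors' code), with Ω encoded as a 0-1 mask matrix.

```python
import numpy as np

def proj_omega(X, mask):
    # P_Omega keeps only the observed entries (mask is 1 on Omega, 0 elsewhere)
    return X * mask

def completion_objective(X, M, mask):
    # f(X) = 1/2 ||P_Omega(X) - P_Omega(M)||_F^2
    return 0.5 * np.linalg.norm(proj_omega(X - M, mask)) ** 2
```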

In our experiments we first created random matrices M_L ∈ R^{m×k} and M_R ∈ R^{m×k} with i.i.d. Gaussian entries and then set M = M_L M_R^T. We sampled a subset Ω of p entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC, FPCA [10], and the OptSpace (OPT) method [29], using

  ‖P_Ω(X) − P_Ω(M)‖_F / ‖P_Ω(M)‖_F ≤ 10^{−3}  (49)

as a stopping criterion. The maximum number of iterations is no more than 1500 for all methods; other parameters are set to their defaults. We report results averaged over 10 runs.
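Test instances of this kind can be generated as follows (a sketch under our own naming; the seed and shapes are assumptions for illustration):

```python
import numpy as np

def make_test_problem(m, n, k, p, rng=np.random.default_rng(0)):
    # random rank-k matrix M = M_L M_R^T, plus p observed entries
    # sampled uniformly at random without replacement
    M = rng.standard_normal((m, k)) @ rng.standard_normal((n, k)).T
    mask = np.zeros(m * n)
    mask[rng.choice(m * n, size=p, replace=False)] = 1.0
    return M, mask.reshape(m, n)
```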

We use SR = p/(mn) to denote the sampling ratio and FR = k(m + n − k)/p to represent the dimension of the set of rank-k matrices divided by the number of measurements. NS and AT denote the number of matrices recovered successfully and the average time (in seconds) over the successfully solved instances, respectively. RE represents the average relative error of the successfully recovered matrices. We use the relative error

  ‖M − X*‖_F / ‖M‖_F  (50)

to estimate the closeness of X* to M, where X* is the optimal solution to (48) produced by our method. We declared M to be recovered if the relative error was less than 10^{−3}.

Table 2 shows that our method performs close to OptSpace in the NS column. Both perform better than SVT, FPC, and FPCA. However, SVT, FPC, and FPCA cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. The results do not mean that these algorithms are invalid, but the maximum


Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0988 | 10 | 0.02 | 1.23E−04 | 6 | 0.5550 | 9  | 0.12 | 4.64E−04
SVT      |   |        | 10 | 0.11 | 1.59E−04 |   |        | 0  | —    | —
FPC      |   |        | 9  | 0.13 | 6.52E−04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.03 | 3.98E−05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.31E−04 |   |        | 10 | 0.21 | 4.05E−04
DC       | 2 | 0.1950 | 10 | 0.03 | 1.50E−04 | 7 | 0.6388 | 9  | 0.17 | 6.11E−04
SVT      |   |        | 9  | 0.18 | 1.83E−04 |   |        | 0  | —    | —
FPC      |   |        | 8  | 0.14 | 6.52E−04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.02 | 5.83E−05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.51E−04 |   |        | 9  | 0.37 | 5.37E−04
DC       | 3 | 0.2888 | 10 | 0.05 | 2.68E−04 | 8 | 0.7200 | 7  | 0.28 | 7.90E−04
SVT      |   |        | 2  | 0.36 | 2.47E−04 |   |        | 0  | —    | —
FPC      |   |        | 5  | 0.25 | 8.58E−04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.06 | 1.38E−04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.75E−04 |   |        | 9  | 0.82 | 6.68E−04
DC       | 4 | 0.3800 | 10 | 0.06 | 2.84E−04 | 9 | 0.7987 | 4  | 0.40 | 8.93E−04
SVT      |   |        | 1  | 0.62 | 1.38E−04 |   |        | 0  | —    | —
FPC      |   |        | 1  | 0.25 | 8.72E−04 |   |        | 0  | —    | —
FPCA     |   |        | 0  | —    | —        |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.68E−04 |   |        | 4  | 1.48 | 9.28E−04
DC       | 5 | 0.4688 | 10 | 0.08 | 3.82E−04 |
SVT      |   |        | 0  | —    | —        |
FPC      |   |        | 0  | —    | —        |
FPCA     |   |        | 0  | —    | —        |
OptSpace |   |        | 10 | 0.09 | 3.16E−04 |

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0995 | 10 | 0.30 | 1.91E−04 | 5 | 0.4875 | 10 | 1.50 | 5.05E−04
SVT      |   |        | 7  | 1.82 | 2.04E−04 |   |        | 0  | —    | —
FPC      |   |        | 10 | 3.14 | 8.06E−04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.12 | 1.73E−04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.03 | 1.71E−04 |   |        | 10 | 0.58 | 5.57E−04
DC       | 2 | 0.1980 | 10 | 0.77 | 3.34E−04 | 6 | 0.5820 | 6  | 2.82 | 7.45E−04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 8  | 0.21 | 2.97E−04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.27E−04 |   |        | 9  | 0.97 | 6.57E−04
DC       | 3 | 0.2955 | 10 | 0.64 | 3.03E−04 | 7 | 0.6755 | 5  | 3.24 | 7.66E−04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 6  | 0.09 | 2.20E−04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.12 | 3.76E−04 |   |        | 5  | 1.89 | 7.95E−04
DC       | 4 | 0.3920 | 9  | 1.16 | 4.40E−04 |
SVT      |   |        | 0  | —    | —        |
FPC      |   |        | 0  | —    | —        |
FPCA     |   |        | 0  | —    | —        |
OptSpace |   |        | 10 | 0.22 | 3.85E−04 |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original, then results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg      | k | FR     | NS | AT   | RE       | k  | FR     | NS | AT   | RE
DC       | 1 | 0.0663 | 10 | 0.19 | 1.49E−04 | 7  | 0.4530 | 10 | 0.61 | 3.64E−04
SVT      |   |        | 10 | 0.68 | 1.66E−04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 1.33 | 4.47E−04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.17 | 6.12E−05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 1.29E−04 |    |        | 10 | 0.47 | 3.39E−04
DC       | 2 | 0.1320 | 10 | 0.20 | 1.69E−04 | 8  | 0.5120 | 10 | 0.87 | 4.70E−04
SVT      |   |        | 8  | 1.39 | 1.66E−04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 2.18 | 4.84E−04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.16 | 7.68E−05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.48E−04 |    |        | 10 | 0.63 | 3.67E−04
DC       | 3 | 0.1970 | 10 | 0.25 | 1.92E−04 | 9  | 0.5730 | 9  | 1.03 | 4.85E−04
SVT      |   |        | 4  | 1.77 | 1.23E−04 |    |        | 0  | —    | —
FPC      |   |        | 9  | 4.32 | 5.82E−04 |    |        | 0  | —    | —
FPCA     |   |        | 9  | 0.19 | 1.21E−04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.07 | 1.98E−04 |    |        | 10 | 1.40 | 4.87E−04
DC       | 4 | 0.2613 | 10 | 0.32 | 2.30E−04 | 10 | 0.6333 | 8  | 1.36 | 6.21E−04
SVT      |   |        | 4  | 9.82 | 4.67E−04 |    |        | 0  | —    | —
FPC      |   |        | 7  | 5.02 | 6.59E−04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.18 | 6.63E−04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.09 | 2.08E−04 |    |        | 10 | 1.80 | 5.97E−04
DC       | 5 | 0.3250 | 10 | 0.37 | 2.72E−04 | 11 | 0.6930 | 9  | 2.19 | 7.29E−04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 1  | 8.18 | 7.34E−04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.10 | 2.20E−04 |    |        | 9  | —    | —
OptSpace |   |        | 10 | 0.19 | 2.67E−04 |    |        | 5  | 4.11 | 7.05E−04
DC       | 6 | 0.3880 | 10 | 0.47 | 3.00E−04 | 12 | 0.7520 | 6  | 2.83 | 8.79E−04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPCA     |   |        | 0  | 0.09 | 2.20E−04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.25 | 3.25E−04 |    |        | 6  | 6.44 | 8.47E−04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method has better NS performance and relative errors similar to those of OptSpace. In terms of time, our method is slower than OptSpace but faster than FPC and SVT. Tables 3 and 4 show these results. In conclusion, our method has better performance on "hard" problems, that is, those with FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments, the maximum number of iterations is chosen by the default of each compared method. We observe that our method can recover problems successfully, as the other four methods do, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, results by our method, and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n) | k  | p      | SR   | FR   | Alg      | NS | AT     | RE
100  | 10 | 5666   | 0.57 | 0.34 | DC       | 10 | 0.14   | 1.91E−04
     |    |        |      |      | SVT      | 10 | 2.81   | 1.85E−04
     |    |        |      |      | FPC      | 10 | 5.23   | 1.83E−04
     |    |        |      |      | FPCA     | 10 | 0.28   | 4.94E−05
     |    |        |      |      | OptSpace | 10 | 0.9    | 1.41E−04
200  | 10 | 15665  | 0.39 | 0.25 | DC       | 10 | 0.80   | 1.65E−04
     |    |        |      |      | SVT      | 10 | 2.93   | 2.53E−04
     |    |        |      |      | FPC      | 10 | 4.18   | 2.73E−04
     |    |        |      |      | FPCA     | 10 | 0.08   | 6.88E−05
     |    |        |      |      | OptSpace | 10 | 0.41   | 1.72E−04
300  | 10 | 27160  | 0.30 | 0.22 | DC       | 10 | 2.72   | 1.67E−04
     |    |        |      |      | SVT      | 10 | 2.57   | 1.79E−04
     |    |        |      |      | FPC      | 10 | 9.8    | 1.63E−04
     |    |        |      |      | FPCA     | 10 | 0.39   | 6.71E−05
     |    |        |      |      | OptSpace | 10 | 1.85   | 1.50E−04
400  | 10 | 41269  | 0.26 | 0.19 | DC       | 10 | 5.73   | 1.56E−04
     |    |        |      |      | SVT      | 10 | 2.78   | 1.66E−04
     |    |        |      |      | FPC      | 10 | 13.28  | 1.30E−04
     |    |        |      |      | FPCA     | 10 | 0.73   | 6.65E−05
     |    |        |      |      | OptSpace | 10 | 3.47   | 1.46E−04
500  | 10 | 49471  | 0.20 | 0.2  | DC       | 10 | 13.63  | 1.73E−04
     |    |        |      |      | SVT      | 10 | 2.94   | 1.74E−04
     |    |        |      |      | FPC      | 10 | 25.75  | 1.44E−04
     |    |        |      |      | FPCA     | 10 | 1.12   | 1.07E−04
     |    |        |      |      | OptSpace | 10 | 5.05   | 1.60E−04
1000 | 10 | 119406 | 0.12 | 0.17 | DC       | 10 | 356.41 | 1.70E−04
     |    |        |      |      | SVT      | 10 | 4.56   | 1.66E−04
     |    |        |      |      | FPC      | 10 | 110.53 | 1.29E−04
     |    |        |      |      | FPCA     | 10 | 5.16   | 1.64E−04
     |    |        |      |      | OptSpace | 10 | 14.61  | 1.50E−04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

$$\begin{aligned} \min\quad & \operatorname{Tr}(LX) \\ \text{s.t.}\quad & \operatorname{diag}(X) = e, \\ & \operatorname{rank}(X) = 1, \\ & X \succeq 0, \end{aligned} \tag{51}$$

where $X = xx^T$, $x = (x_1, \ldots, x_n)^T$, $L = \frac{1}{4}(\operatorname{diag}(We) - W)$, $e = (1, 1, \ldots, 1)^T$, and $W$ is the weight matrix. Equation (51) is one general case of problem (3) with $\Gamma \cap \Omega = \{X \mid \operatorname{diag}(X) = e,\ X \in S^n_+\}$. Hence our method can be suitably applied to (51).
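The identity behind (51) can be checked directly: for any cut vector $x \in \{-1, 1\}^n$, $\operatorname{Tr}(L\,xx^T)$ equals the total weight of the edges crossing the cut. A minimal sketch (Python/NumPy rather than the MATLAB used for our experiments; the 4-node weight matrix is an illustrative assumption, not one of the paper's test instances):

```python
import numpy as np

# Hypothetical symmetric weight matrix of a 4-node graph (illustrative only).
W = np.array([[0., 1., 2., 0.],
              [1., 0., 0., 3.],
              [2., 0., 0., 1.],
              [0., 3., 1., 0.]])
n = W.shape[0]
e = np.ones(n)
L = 0.25 * (np.diag(W @ e) - W)        # L = (1/4)(diag(We) - W)

x = np.array([1., -1., -1., 1.])       # cut {0, 3} vs {1, 2}
X = np.outer(x, x)                     # feasible for (51): rank 1, diag(X) = e, X PSD

obj = np.trace(L @ X)
# total weight of edges whose endpoints lie on opposite sides of the cut
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])
print(obj, cut)                        # both equal 7.0
```

This follows from $x^T L x = \frac{1}{4}\sum_{i,j} w_{ij}(1 - x_i x_j)$, since $x_i^2 = 1$.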

We now address the initialization and termination criteria for our method when applied to (51). We set $X^0 = ee^T$; in addition, we choose the initial penalty parameter $\rho$ to be 0.01 and set the parameter $\tau$ to be 1.1. We use $\|X^{i+1} - X^i\|_F \le \epsilon$ as the termination criterion, with $\epsilon = 10^{-5}$.

In our numerical experiments we first test several small-scale max-cut problems; then we apply our method to the standard "G" set problems written by Professor Ginaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the corresponding dimension of the graph and the rank of the solution, respectively. The "obv" column gives the objective values of the optimal matrix solutions obtained by our method.

In Table 6, "KV" denotes the known optimal objective values and "UB" presents the upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for solving these problems, and it has computed the global optimal solutions of all of them. Moreover, we obtain lower rank solutions compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds by the spectral bundle method proposed by Helmberg and Rendl [41]; "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]; "DC" denotes results generated by our method; "DC+" denotes the results of DC with the neighborhood approximation algorithm; and the rightmost column, titled "BK", gives the best results known to us, according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research, better than the values given by GW-cut, and all less than the corresponding upper bounds. For all problems we obtain lower rank solutions compared with GW-cut; moreover, most solutions satisfy the constraint rank(X) = 1. The results show that our algorithm is effective for solving max-cut problems.

4. Concluding Remarks

In this paper we used the closed form of the penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

| Graph | m  | KV  | GW-cut UB | GW-cut k | DC obv | DC k | Optimal solutions |
|-------|----|-----|-----------|----------|--------|------|-------------------|
| C5    | 5  | 4   | 4.525     | 5        | 4      | 1    | (−1, 1, −1, 1, −1) |
| K5    | 5  | 6   | 6.25      | 5        | 6      | 1    | (−1, 1, −1, 1, −1) |
| AG    | 5  | 928 | 960.40    | 5        | 928    | 1    | (−1, 1, 1, −1, −1) |
| BG    | 12 | 88  | 90.3919   | 12       | 88     | 1    | (−1, 1, 1, −1, −1, −1, −1, −1, −1, 1, 1, 1) |

Table 7: Numerical results for G problems.

| Graph | m    | SUB   | GW-cut obv | GW-cut k | DC obv | DC k | DC+   | BK    |
|-------|------|-------|------------|----------|--------|------|-------|-------|
| G01   | 800  | 12078 | 10612      | 14       | 11281  | 1    | 11518 | 11624 |
| G02   | 800  | 12084 | 10617      | 31       | 11261  | 1    | 11550 | 11620 |
| G03   | 800  | 12077 | 10610      | 14       | 11275  | 1    | 11540 | 11622 |
| G11   | 800  | 627   | 551        | 30       | 494    | 2    | 528   | 564   |
| G14   | 800  | 3187  | 2800       | 13       | 2806   | 1    | 2980  | 3063  |
| G15   | 800  | 3169  | 2785       | 70       | 2767   | 1    | 2969  | 3050  |
| G16   | 800  | 3172  | 2782       | 14       | 2800   | 1    | 2974  | 3052  |
| G43   | 1000 | 7027  | 6174       | —        | 6321   | 1    | 6519  | 6660  |
| G45   | 1000 | 7020  | 6168       | —        | 6348   | 1    | 6564  | 6654  |
| G52   | 1000 | 4009  | 3520       | —        | 3383   | 1    | 3713  | 3849  |

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wide class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors gratefully acknowledge the fund projects. Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.
[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.
[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.
[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.
[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.
[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.
[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.
[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.
[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.
[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.
[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.
[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.
[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.
[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.
[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.
[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.
[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '02), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.
[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.
[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.
[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.
[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.
[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.
[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.
[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.
[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.
[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.
[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.
[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.
[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 512–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.
[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.
[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.
[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.
[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.
[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[40] http://www.stanford.edu/~yyye/yyye/Gset/.
[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.
[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.
[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.
[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.



Hence, without loss of generality, assuming $k \le s \le t$, we have

$$\|Y_1 - Y_2\|_F^2 - \|\tilde{Y}_1 - \tilde{Y}_2\|_F^2 \ge \sum_{i=1}^{s} \sigma_i^2 - \sum_{i=1}^{k} \sigma_i^2 + \sum_{i=1}^{t} \gamma_i^2 - \sum_{i=1}^{k} \gamma_i^2 - 2\left(\sum_{i=k+1}^{s} \sigma_i \gamma_i\right) = \sum_{i=k+1}^{s} (\sigma_i - \gamma_i)^2 + \sum_{i=s+1}^{t} \gamma_i^2 \ge 0, \tag{34}$$

which implies that $\|\tilde{Y}_1 - \tilde{Y}_2\|_F \le \|Y_1 - Y_2\|_F$ holds.

Next we claim that the objective values of the iterates generated by the stepwise linear approximative algorithm are nonincreasing.

Theorem 3. Let $\{X^i\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^i)\}$ is monotonically nonincreasing.

Proof. Since $X^{i+1} \in \arg\min\{H(X, X^i) \mid X \in \Gamma \cap \Omega\}$, we have

$$H(X^{i+1}, X^i) \le H(X^i, X^i) = f(X^i) + \frac{\rho}{2}\|X^i\|_F^2 - \frac{\rho}{2}\, g(X^i) = F(X^i), \tag{35}$$

and $\partial g(X)$ is the subdifferential of the convex function $g(X)$, which implies

$$\forall X, Y:\quad g(X) \ge g(Y) + \langle \partial g(Y),\, X - Y \rangle. \tag{36}$$

Let $X = X^{i+1}$ and $Y = X^i$ in the above inequality; we have

$$g(X^{i+1}) \ge g(X^i) + \langle \partial g(X^i),\, X^{i+1} - X^i \rangle. \tag{37}$$

Therefore

$$\begin{aligned} H(X^{i+1}, X^i) &= f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}\left(g(X^i) + \langle \partial g(X^i),\, X^{i+1} - X^i \rangle\right) \\ &\ge f(X^{i+1}) + \frac{\rho}{2}\|X^{i+1}\|_F^2 - \frac{\rho}{2}\, g(X^{i+1}) = F(X^{i+1}). \end{aligned} \tag{38}$$

Consequently,

$$F(X^{i+1}) \le H(X^{i+1}, X^i) \le H(X^i, X^i) = F(X^i). \tag{39}$$

We immediately obtain the conclusion as desired.
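Inequality (36) can be sanity-checked numerically for the choice $g(X) = \sum_{i=1}^{k} \sigma_i^2(X)$, taking $2U_kU_k^TX$ as a subgradient at $X$ (with $U_k$ the top-$k$ left singular vectors); this is a valid subgradient because $g$ is the pointwise maximum of the convex functions $\|PX\|_F^2$ over rank-$k$ orthogonal projectors $P$. The following sketch (Python/NumPy, random data, purely illustrative) verifies that the linearization minorizes $g$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 3

def g(X):
    # sum of the k largest squared singular values of X
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s[:k] ** 2)

def subgrad_g(X):
    # a subgradient of g at X: 2 * U_k U_k^T X
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Uk = U[:, :k]
    return 2.0 * Uk @ Uk.T @ X

# check g(X) >= g(Y) + <subgrad_g(Y), X - Y> on random pairs
ok = True
for _ in range(100):
    X = rng.standard_normal((m, n))
    Y = rng.standard_normal((m, n))
    ok &= g(X) >= g(Y) + np.sum(subgrad_g(Y) * (X - Y)) - 1e-9
print(ok)  # True: the linearization minorizes g, as used in (36)-(38)
```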

Next we show the convergence of the iterative sequence generated by the stepwise linear approximative algorithm when solving (23). Theorem 4 treats the case $\Gamma \cap \Omega = R^{m \times n}$.

Theorem 4. Let $\Upsilon$ be the set of stable points of problem (23) with no constraints; then the iterative sequence $\{X^i\}$ generated by the stepwise linear approximative algorithm converges to some $X^{\mathrm{opt}} \in \Upsilon$ when $\rho \to \infty$.

Proof. Assume that $X^{\mathrm{opt}} \in \Upsilon$ is any optimal solution of problem (23) with $\Gamma \cap \Omega = R^{m \times n}$. According to the first-order optimality conditions, we have

$$0 = \partial f(X^{\mathrm{opt}}) + \rho X^{\mathrm{opt}} - \rho\, U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}}. \tag{40}$$

Since $X^{i+1} \in \arg\min\{f(X) + \frac{\rho}{2}\|X\|_F^2 - \frac{\rho}{2}(g(X^i) + \langle \partial g(X^i),\, X - X^i \rangle)\}$, then

$$0 = \partial f(X^{i+1}) + \rho X^{i+1} - \rho\, U^i W (U^i)^T X^i. \tag{41}$$

Subtracting (40) from (41) gives

$$X^{i+1} - X^{\mathrm{opt}} = -\frac{1}{\rho}\left(\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\right) + U^i W (U^i)^T X^i - U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}}, \tag{42}$$

in which $U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i = U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T$. By Lemma 2 we have

$$\left\|U^{\mathrm{opt}} W (U^{\mathrm{opt}})^T X^{\mathrm{opt}} - U^i W (U^i)^T X^i\right\|_F = \left\|U^{\mathrm{opt}} W \Sigma^{\mathrm{opt}} (V^{\mathrm{opt}})^T - U^i W \Sigma^i (V^i)^T\right\|_F \le \|X^i - X^{\mathrm{opt}}\|_F. \tag{43}$$

Taking norms on both sides of (42) and using the triangle inequality, we have

$$\|X^{i+1} - X^{\mathrm{opt}}\|_F = \lim_{\rho \to \infty} \|X^{i+1} - X^{\mathrm{opt}}\|_F \le \lim_{\rho \to \infty}\left(\frac{1}{\rho}\|\partial f(X^{i+1}) - \partial f(X^{\mathrm{opt}})\|_F + \|X^i - X^{\mathrm{opt}}\|_F\right) = \|X^i - X^{\mathrm{opt}}\|_F, \tag{44}$$

which implies that the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing when $\rho \to \infty$. We then observe that $\|X^i - X^{\mathrm{opt}}\|_F$ is bounded, and hence $\lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F$ exists, denoted by $\|\bar{X} - X^{\mathrm{opt}}\|_F$, where $\bar{X}$ is a limit point of $\{X^i\}$. Letting $i \to \infty$ in (41), by continuity we get $0 = \partial f(\bar{X}) + \rho \bar{X} - \rho \bar{U} W \bar{U}^T \bar{X}$, which implies that $\bar{X} \in \Upsilon$. By setting $X^{\mathrm{opt}} = \bar{X}$, we have

$$\lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F = \|\bar{X} - X^{\mathrm{opt}}\|_F = 0, \tag{45}$$

which completes the proof.

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set $\Gamma \cap \Omega$.

Theorem 5. Let $\xi$ be the set of optimal solutions of problem (23); then the iterative sequence $\{Y^i\}$ generated by the stepwise linear approximative algorithm converges to some $Y^{\mathrm{opt}} \in \xi$ when $\rho \to \infty$.

Proof. Generally speaking, subproblem (26) can be solved by projecting the solution of the unconstrained problem onto the closed convex set; that is, $Y^{i+1} = P_{\Gamma \cap \Omega}(X^{i+1})$, where $X^{i+1} \in \arg\min H(X, X^i)$. According to Theorem 4, the sequence $\{\|X^i - X^{\mathrm{opt}}\|_F\}$ is monotonically nonincreasing and convergent when $\rho \to \infty$; together with the well-known fact that $P_{\Gamma \cap \Omega}$ is a nonexpansive operator, we have

$$\|Y^i - Y^{\mathrm{opt}}\|_F = \|P_{\Gamma \cap \Omega}(X^i) - P_{\Gamma \cap \Omega}(X^{\mathrm{opt}})\|_F \le \|X^i - X^{\mathrm{opt}}\|_F, \tag{46}$$

where $Y^{\mathrm{opt}}$ is the projection of $X^{\mathrm{opt}}$ onto $\Gamma \cap \Omega$. Clearly $Y^{\mathrm{opt}} \in \xi$. Hence $0 \le \lim_{i \to \infty} \|Y^i - Y^{\mathrm{opt}}\|_F \le \lim_{i \to \infty} \|X^i - X^{\mathrm{opt}}\|_F = 0$, which implies that $\{Y^i\}$ converges to $Y^{\mathrm{opt}}$. We immediately obtain the conclusion.
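The nonexpansiveness of $P_{\Gamma \cap \Omega}$ used in (46) is a standard property of projections onto closed convex sets; for instance, projection onto the affine set $\{X \mid \operatorname{diag}(X) = e\}$ simply resets the diagonal. A toy check (Python/NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
e = np.ones(n)

def proj_diag_e(X):
    # Euclidean projection onto the affine set {X : diag(X) = e}:
    # off-diagonal entries are kept, the diagonal is set to e
    Y = X.copy()
    np.fill_diagonal(Y, e)
    return Y

X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
lhs = np.linalg.norm(proj_diag_e(X) - proj_diag_e(Y), 'fro')
rhs = np.linalg.norm(X - Y, 'fro')
print(lhs <= rhs)  # True: ||P(X) - P(Y)||_F <= ||X - Y||_F
```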

3. Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes implemented in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to one case of affine rank minimization problems. It can be formulated as

$$\begin{aligned} \min\quad & \frac{1}{2}\|\mathcal{A}(X) - b\|_2^2 \\ \text{s.t.}\quad & \operatorname{rank}(X) \le k, \\ & X \in R^{m \times n}, \end{aligned} \tag{47}$$

where $X \in R^{m \times n}$ is the decision variable, and the linear map $\mathcal{A}: R^{m \times n} \to R^p$ and vector $b \in R^p$ are known. Clearly (47) is a special case of problem (3). Hence our method can be suitably applied to (47).

Figure 1: Performance curves under four different measurement matrices: SNR versus the number of linear constraints.

Recht et al. [39] have demonstrated that when we sample linear maps from a class of probability distributions, then they

would obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric. First, it must be isometric in expectation. Second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first is the ensemble with independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Bernoulli. The third has zeros in two-thirds of the entries and the others sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Bernoulli. The last has an orthonormal basis, which we call Random Projection.
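The four ensembles can be sampled as follows; this is a Python/NumPy sketch (the experiments themselves are in MATLAB), and the normalizations are our assumptions following the standard choices in [39]:

```python
import numpy as np

rng = np.random.default_rng(42)

def measurement_matrix(kind, p, d):
    """Sample a p x d nearly isometric measurement matrix.
    Normalizations are assumed, following the usual choices in [39]."""
    if kind == 'gauss':
        return rng.standard_normal((p, d)) / np.sqrt(p)
    if kind == 'bernoulli':                      # "Binary" in the figures
        return rng.choice([-1.0, 1.0], size=(p, d)) / np.sqrt(p)
    if kind == 'sparse':                         # zero with probability 2/3
        vals = rng.choice([-1.0, 0.0, 1.0], size=(p, d), p=[1/6, 2/3, 1/6])
        return vals * np.sqrt(3.0 / p)
    if kind == 'projection':                     # orthonormal rows (d >= p)
        Q, _ = np.linalg.qr(rng.standard_normal((d, p)))
        return Q[:, :p].T
    raise ValueError(kind)

# A(X) = A @ vec(X): measuring a 46 x 81 matrix (the MIT logo size)
m, n, p = 46, 81, 1000
A = measurement_matrix('gauss', p, m * n)
X = rng.standard_normal((m, n))
b = A @ X.ravel()                                # p linear measurements
```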

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix $X$, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is $46 \times 81 = 3726$). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Bernoulli i.i.d., Sparse Bernoulli i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo $X$ to be recovered if $\|X - X^{\mathrm{opt}}\|_F / \|X\|_F \le 10^{-3}$, that is, $\mathrm{SNR} \ge 60$ dB, where $\mathrm{SNR} = -20 \log(\|X - X^{\mathrm{opt}}\|_F / \|X\|_F)$. Figures 1 and 2 show the results of our numerical experiments.

Figure 1 plots the curves of SNR under the four different measurement matrices mentioned above with different numbers of linear constraints. The numerical values on the $x$-axis represent the number of linear constraints $p$, that is, the number of measurements, while those on the $y$-axis represent the Signal to Noise Ratio between the true original $X$ and the $X^{\mathrm{opt}}$ recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct MIT logo images from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but has the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than $2k(m + n - k)$. Compare this with Figure 2, which shows the results of the nuclear norm heuristic method in [39], a kind of convex relaxation: there, the sharp transition to perfect recovery occurs near 1200 measurements, approximately equal to $2k(m + n - k)$. The comparison shows that our method has better performance, in that it can recover an image with fewer constraints, namely with fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method achieves 120 dB when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% compared with 1500. If the number of measurements were 1500, the accuracy of our method would be higher. Among the precisions of the four different measurement matrices, the smallest is that of Random Projection; however, its value is still higher than 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500, we find that our curve rises smoothly compared with the curve on the same interval in Figure 2, which shows that our method is stable for reconstructing the image.
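The recovery criterion above ties SNR to the relative error: $\mathrm{SNR} = -20\log_{10}(\|X - X^{\mathrm{opt}}\|_F/\|X\|_F) \ge 60$ dB is the same as a relative error of at most $10^{-3}$. A short check with hypothetical numbers (Python/NumPy):

```python
import numpy as np

def snr_db(X, X_opt):
    # SNR = -20 log10(||X - X_opt||_F / ||X||_F); recovery means SNR >= 60 dB,
    # i.e., relative error <= 1e-3
    rel_err = np.linalg.norm(X - X_opt, 'fro') / np.linalg.norm(X, 'fro')
    return -20.0 * np.log10(rel_err)

X = np.eye(4)
X_opt = X + 1e-4 * np.ones((4, 4))       # hypothetical recovered matrix
print(snr_db(X, X_opt) >= 60.0)          # True: relative error 2e-4, about 74 dB
```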

Figure 3 shows recovered images under the four different measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose to compare against this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely

Table 1: DC versus SeDuMi in time (seconds).

| p      | 800   | 900    | 1000   | 1100   | 1200   | 1500   |
|--------|-------|--------|--------|--------|--------|--------|
| DC     | 46.82 | 50.49  | 50.78  | 51.38  | 51.96  | 54.09  |
| SeDuMi | 93.26 | 120.16 | 138.24 | 159.44 | 161.76 | 245.38 |

sensitive to $p$ compared to SeDuMi. The reason is that closed-form solutions can be obtained when we solve this problem. Comparisons are shown in Table 1.

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be reformulated as

$$\begin{aligned} \min\quad & \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2 \\ \text{s.t.}\quad & \operatorname{rank}(X) \le k, \\ & X \in R^{m \times n}, \end{aligned} \tag{48}$$

where $X$ and $M$ are both $m \times n$ matrices and $\Omega$ is a subset of index pairs $(i, j)$. Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with $f(X) = \frac{1}{2}\|P_\Omega(X) - P_\Omega(M)\|_F^2$.
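The operator $P_\Omega$ keeps the observed entries and zeroes out the rest, so $f$ and its gradient have simple closed forms. A sketch (Python/NumPy, illustrative; a boolean mask plays the role of $\Omega$):

```python
import numpy as np

def P_Omega(X, mask):
    # Projection onto observed entries: keeps X_ij for (i,j) in Omega, zeros elsewhere
    return X * mask

def f(X, M, mask):
    # f(X) = 1/2 ||P_Omega(X) - P_Omega(M)||_F^2
    R = P_Omega(X - M, mask)
    return 0.5 * np.sum(R ** 2)

def grad_f(X, M, mask):
    # gradient of f: nonzero only on the observed entries
    return P_Omega(X - M, mask)

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
mask = rng.random((5, 5)) < 0.5          # Omega drawn at random
X = np.zeros((5, 5))
print(f(X, M, mask))                      # half the squared norm of the observed entries
```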

In our experiments we first created random matrices $M_L \in R^{m \times k}$ and $M_R \in R^{m \times k}$ with i.i.d. Gaussian entries and then set $M = M_L M_R^T$. We sampled a subset $\Omega$ of $p$ entries uniformly at random. We compare against the singular value thresholding (SVT) method [9], FPC, FPCA [10], and the OptSpace (OPT) method [29], using

$$\frac{\|P_\Omega(X) - P_\Omega(M)\|_F}{\|P_\Omega(M)\|_F} \le 10^{-3} \tag{49}$$

as a stopping criterion. The maximum number of iterations is no more than 1500 in all methods; other parameters are at their defaults. We report results averaged over 10 runs.
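The test instances can be generated exactly as described; a Python/NumPy stand-in for the MATLAB setup (sizes chosen to match the $m = n = 40$, SR $= 0.5$ setting of Table 2):

```python
import numpy as np

rng = np.random.default_rng(7)
m = n = 40
k, p = 2, 800                             # rank and number of sampled entries (SR = 0.5)

# M = M_L M_R^T with i.i.d. Gaussian factors, so rank(M) <= k
M_L = rng.standard_normal((m, k))
M_R = rng.standard_normal((n, k))
M = M_L @ M_R.T

# sample a subset Omega of p entries uniformly at random (without replacement)
idx = rng.choice(m * n, size=p, replace=False)
mask = np.zeros(m * n, dtype=bool)
mask[idx] = True
mask = mask.reshape(m, n)
print(np.linalg.matrix_rank(M), mask.sum())   # 2 800
```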

We use $\mathrm{SR} = p/(mn)$ to denote the sampling ratio and $\mathrm{FR} = k(m + n - k)/p$ to represent the dimension of the set of rank-$k$ matrices divided by the number of measurements. NS and AT denote the number of matrices that are recovered successfully and the average time (in seconds) over the successfully solved instances, respectively. RE represents the average relative error of the successfully recovered matrices. We use the relative error

$$\frac{\|M - X^*\|_F}{\|M\|_F} \tag{50}$$

to estimate the closeness of $X^*$ to $M$, where $X^*$ is the optimal solution to (48) produced by our method. We declared $M$ to be recovered if the relative error was less than $10^{-3}$.
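These quantities are straightforward to compute; a small sketch with values from the second block of Table 2 (Python, illustrative):

```python
import numpy as np

def sampling_ratio(p, m, n):
    return p / (m * n)                   # SR = p / (mn)

def freedom_ratio(k, m, n, p):
    return k * (m + n - k) / p           # FR = k(m + n - k) / p

def relative_error(M, X_star):
    # relative error (50) used to declare recovery
    return np.linalg.norm(M - X_star, 'fro') / np.linalg.norm(M, 'fro')

m = n = 40; k = 2; p = 800               # the k = 2 setting of Table 2
print(sampling_ratio(p, m, n), freedom_ratio(k, m, n, p))   # 0.5 0.195

# a hypothetical near-perfect recovery counts as recovered
recovered = relative_error(np.eye(3), np.eye(3) * (1 + 1e-4)) < 1e-3
```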

Table 2 shows that our method has performance close to that of OptSpace in the NS column. These performances are better than those of SVT, FPC, and FPCA. However, SVT, FPC, and FPCA cannot obtain the optimal matrix within 1500 iterations, which shows that our method has an advantage in finding the neighborhood of the optimal solution. The results do not mean that these algorithms are invalid, but that the maximum


Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

| k | FR     | Alg      | NS | AT   | RE       |
|---|--------|----------|----|------|----------|
| 1 | 0.0988 | DC       | 10 | 0.02 | 1.23E-04 |
|   |        | SVT      | 10 | 0.11 | 1.59E-04 |
|   |        | FPC      | 9  | 0.13 | 6.52E-04 |
|   |        | FPCA     | 10 | 0.03 | 3.98E-05 |
|   |        | OptSpace | 10 | 0.01 | 1.31E-04 |
| 2 | 0.1950 | DC       | 10 | 0.03 | 1.50E-04 |
|   |        | SVT      | 9  | 0.18 | 1.83E-04 |
|   |        | FPC      | 8  | 0.14 | 6.52E-04 |
|   |        | FPCA     | 10 | 0.02 | 5.83E-05 |
|   |        | OptSpace | 10 | 0.01 | 1.51E-04 |
| 3 | 0.2888 | DC       | 10 | 0.05 | 2.68E-04 |
|   |        | SVT      | 2  | 0.36 | 2.47E-04 |
|   |        | FPC      | 5  | 0.25 | 8.58E-04 |
|   |        | FPCA     | 10 | 0.06 | 1.38E-04 |
|   |        | OptSpace | 10 | 0.04 | 1.75E-04 |
| 4 | 0.3800 | DC       | 10 | 0.06 | 2.84E-04 |
|   |        | SVT      | 1  | 0.62 | 1.38E-04 |
|   |        | FPC      | 1  | 0.25 | 8.72E-04 |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 10 | 0.05 | 2.68E-04 |
| 5 | 0.4688 | DC       | 10 | 0.08 | 3.82E-04 |
|   |        | SVT      | 0  | —    | —        |
|   |        | FPC      | 0  | —    | —        |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 10 | 0.09 | 3.16E-04 |
| 6 | 0.5550 | DC       | 9  | 0.12 | 4.64E-04 |
|   |        | SVT      | 0  | —    | —        |
|   |        | FPC      | 0  | —    | —        |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 10 | 0.21 | 4.05E-04 |
| 7 | 0.6388 | DC       | 9  | 0.17 | 6.11E-04 |
|   |        | SVT      | 0  | —    | —        |
|   |        | FPC      | 0  | —    | —        |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 9  | 0.37 | 5.37E-04 |
| 8 | 0.7200 | DC       | 7  | 0.28 | 7.90E-04 |
|   |        | SVT      | 0  | —    | —        |
|   |        | FPC      | 0  | —    | —        |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 9  | 0.82 | 6.68E-04 |
| 9 | 0.7987 | DC       | 4  | 0.40 | 8.93E-04 |
|   |        | SVT      | 0  | —    | —        |
|   |        | FPC      | 0  | —    | —        |
|   |        | FPCA     | 0  | —    | —        |
|   |        | OptSpace | 4  | 1.48 | 9.28E-04 |

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0995 | 10 | 0.30 | 1.91E-04 | 5 | 0.4875 | 10 | 1.50 | 5.05E-04
SVT      |   |        | 7  | 1.82 | 2.04E-04 |   |        | 0  | —    | —
FPC      |   |        | 10 | 3.14 | 8.06E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.12 | 1.73E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.03 | 1.71E-04 |   |        | 10 | 0.58 | 5.57E-04
DC       | 2 | 0.1980 | 10 | 0.77 | 3.34E-04 | 6 | 0.5820 | 6  | 2.82 | 7.45E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 8  | 0.21 | 2.97E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.27E-04 |   |        | 9  | 0.97 | 6.57E-04
DC       | 3 | 0.2955 | 10 | 0.64 | 3.03E-04 | 7 | 0.6755 | 5  | 3.24 | 7.66E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 6  | 0.09 | 2.20E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.12 | 3.76E-04 |   |        | 5  | 1.89 | 7.95E-04
DC       | 4 | 0.3920 | 9  | 1.16 | 4.40E-04 |   |        |    |      |
SVT      |   |        | 0  | —    | —        |   |        |    |      |
FPC      |   |        | 0  | —    | —        |   |        |    |      |
FPCA     |   |        | 0  | —    | —        |   |        |    |      |
OptSpace |   |        | 10 | 0.22 | 3.85E-04 |   |        |    |      |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original and the results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg      | k | FR     | NS | AT   | RE       | k  | FR     | NS | AT   | RE
DC       | 1 | 0.0663 | 10 | 0.19 | 1.49E-04 | 7  | 0.4530 | 10 | 0.61 | 3.64E-04
SVT      |   |        | 10 | 0.68 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 1.33 | 4.47E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.17 | 6.12E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 1.29E-04 |    |        | 10 | 0.47 | 3.39E-04
DC       | 2 | 0.1320 | 10 | 0.20 | 1.69E-04 | 8  | 0.5120 | 10 | 0.87 | 4.70E-04
SVT      |   |        | 8  | 1.39 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 2.18 | 4.84E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.16 | 7.68E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.48E-04 |    |        | 10 | 0.63 | 3.67E-04
DC       | 3 | 0.1970 | 10 | 0.25 | 1.92E-04 | 9  | 0.5730 | 9  | 1.03 | 4.85E-04
SVT      |   |        | 4  | 1.77 | 1.23E-04 |    |        | 0  | —    | —
FPC      |   |        | 9  | 4.32 | 5.82E-04 |    |        | 0  | —    | —
FPCA     |   |        | 9  | 0.19 | 1.21E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.07 | 1.98E-04 |    |        | 10 | 1.40 | 4.87E-04
DC       | 4 | 0.2613 | 10 | 0.32 | 2.30E-04 | 10 | 0.6333 | 8  | 1.36 | 6.21E-04
SVT      |   |        | 4  | 9.82 | 4.67E-04 |    |        | 0  | —    | —
FPC      |   |        | 7  | 5.02 | 6.59E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.18 | 6.63E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.09 | 2.08E-04 |    |        | 10 | 1.80 | 5.97E-04
DC       | 5 | 0.3250 | 10 | 0.37 | 2.72E-04 | 11 | 0.6930 | 9  | 2.19 | 7.29E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 1  | 8.18 | 7.34E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.10 | 2.20E-04 |    |        | 9  | —    | —
OptSpace |   |        | 10 | 0.19 | 2.67E-04 |    |        | 5  | 4.11 | 7.05E-04
DC       | 6 | 0.3880 | 10 | 0.47 | 3.00E-04 | 12 | 0.7520 | 6  | 2.83 | 8.79E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPCA     |   |        | 0  | 0.09 | 2.20E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.25 | 3.25E-04 |    |        | 6  | 6.44 | 8.47E-04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method has better NS performance and relative errors similar to those of OptSpace. In terms of time, our method is slower than OptSpace but faster than FPC and SVT. Tables 3 and 4 show these results. In conclusion, our method performs better on "hard" problems, that is, FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen by the default of each compared method. We observe that our method can recover problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, results by our method, and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n) | k  | p      | SR   | FR   | Alg      | NS | AT     | RE
100  | 10 | 5666   | 0.57 | 0.34 | DC       | 10 | 0.14   | 1.91E-04
     |    |        |      |      | SVT      | 10 | 2.81   | 1.85E-04
     |    |        |      |      | FPC      | 10 | 5.23   | 1.83E-04
     |    |        |      |      | FPCA     | 10 | 0.28   | 4.94E-05
     |    |        |      |      | OptSpace | 10 | 0.9    | 1.41E-04
200  | 10 | 15665  | 0.39 | 0.25 | DC       | 10 | 0.80   | 1.65E-04
     |    |        |      |      | SVT      | 10 | 2.93   | 2.53E-04
     |    |        |      |      | FPC      | 10 | 4.18   | 2.73E-04
     |    |        |      |      | FPCA     | 10 | 0.08   | 6.88E-05
     |    |        |      |      | OptSpace | 10 | 0.41   | 1.72E-04
300  | 10 | 27160  | 0.30 | 0.22 | DC       | 10 | 2.72   | 1.67E-04
     |    |        |      |      | SVT      | 10 | 2.57   | 1.79E-04
     |    |        |      |      | FPC      | 10 | 9.8    | 1.63E-04
     |    |        |      |      | FPCA     | 10 | 0.39   | 6.71E-05
     |    |        |      |      | OptSpace | 10 | 1.85   | 1.50E-04
400  | 10 | 41269  | 0.26 | 0.19 | DC       | 10 | 5.73   | 1.56E-04
     |    |        |      |      | SVT      | 10 | 2.78   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 13.28  | 1.30E-04
     |    |        |      |      | FPCA     | 10 | 0.73   | 6.65E-05
     |    |        |      |      | OptSpace | 10 | 3.47   | 1.46E-04
500  | 10 | 49471  | 0.20 | 0.2  | DC       | 10 | 13.63  | 1.73E-04
     |    |        |      |      | SVT      | 10 | 2.94   | 1.74E-04
     |    |        |      |      | FPC      | 10 | 25.75  | 1.44E-04
     |    |        |      |      | FPCA     | 10 | 1.12   | 1.07E-04
     |    |        |      |      | OptSpace | 10 | 5.05   | 1.60E-04
1000 | 10 | 119406 | 0.12 | 0.17 | DC       | 10 | 356.41 | 1.70E-04
     |    |        |      |      | SVT      | 10 | 4.56   | 1.66E-04
     |    |        |      |      | FPC      | 10 | 110.53 | 1.29E-04
     |    |        |      |      | FPCA     | 10 | 5.16   | 1.64E-04
     |    |        |      |      | OptSpace | 10 | 14.61  | 1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

min  Tr(LX)
s.t. diag(X) = e,
     rank(X) = 1,
     X ⪰ 0,   (51)

where X = xx^T, x = (x_1, ..., x_n)^T, L = (1/4)(diag(We) − W), e = (1, 1, ..., 1)^T, and W is the weight matrix. Problem (51) is a special case of problem (3) with Γ ∩ Ω = {X | diag(X) = e, X ∈ S^n_+}. Hence our method can be suitably applied to (51).
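To make the roles of L and W concrete, the sketch below (illustrative Python/NumPy, not the paper's code; the helper names are ours) builds L = (1/4)(diag(We) − W) for a small graph and checks that Tr(LX) with X = xx^T equals the weight of the cut induced by a ±1 vector x:

```python
import numpy as np

def maxcut_matrix(W):
    # L = (1/4) * (diag(W e) - W), where e is the all-ones vector
    e = np.ones(W.shape[0])
    return 0.25 * (np.diag(W @ e) - W)

def cut_weight(W, x):
    # Total weight of edges crossing the cut defined by x in {-1, +1}^n
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# Unweighted triangle: putting one vertex alone cuts 2 edges
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
x = np.array([1., 1., -1.])
L = maxcut_matrix(W)
X = np.outer(x, x)                 # X = x x^T: diag(X) = e, rank(X) = 1, X PSD
print(np.trace(L @ X))             # 2.0, equal to cut_weight(W, x)
```

This identity, Tr(L xx^T) = (1/4) Σ_{i,j} w_ij (1 − x_i x_j), is what links the matrix formulation (51) to the combinatorial cut value.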

We now address the initialization and termination criteria for our method when applied to (51). We set X^0 = ee^T; in addition, we choose the initial penalty parameter ρ to be 0.01 and set the parameter τ to be 1.1. We use ‖X^{i+1} − X^i‖_F ≤ ε as the termination criterion, with ε = 10^{−5}.

In our numerical experiments we first test several small-scale max-cut problems; then we run our method on the standard "G" set problems written by Professor Ginaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the corresponding dimension of the graph and the rank of the solution, respectively. The "obv" column gives the objective values of the optimal matrix solutions produced by our method.
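The initialization and stopping rule above can be sketched as the following schematic loop. This is illustrative only: `inner_step` is a hypothetical placeholder for the linearized-subproblem solve the paper derives in Section 2, and we assume (plausibly, but it is our assumption) that τ multiplies ρ between iterations; ρ₀ = 0.01, τ = 1.1, and ε = 10⁻⁵ are the values quoted in the text.

```python
import numpy as np

def penalty_loop(n, inner_step, rho0=0.01, tau=1.1, eps=1e-5, max_iter=500):
    X = np.ones((n, n))                     # X^0 = e e^T, the max-cut initialization
    rho = rho0
    for _ in range(max_iter):
        X_new = inner_step(X, rho)          # placeholder for the subproblem solve
        if np.linalg.norm(X_new - X, "fro") <= eps:
            return X_new                    # stop: ||X^{i+1} - X^i||_F <= eps
        X, rho = X_new, tau * rho           # assumed penalty update rho <- tau * rho
    return X

# Dummy inner step that contracts toward a fixed point, just to exercise the loop
target = np.eye(3)
X_out = penalty_loop(3, lambda X, rho: 0.5 * (X + target))
```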

In Table 6, "KV" denotes known optimal objective values and "UB" gives upper bounds for max-cut obtained by semidefinite relaxation. It is easy to observe that our method is effective for solving these problems: it has computed the global optimal solutions of all of them. Moreover, we obtain lower rank solutions compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds by the spectral bundle method proposed by Helmberg and Rendl [41]. "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]. "DC" denotes results generated by our method, "DC+" gives the results of DC with a neighborhood approximation algorithm, and the rightmost column, titled "BK", gives the best known results according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research, better than the values given by GW-cut, and all less than the corresponding upper bounds. For all problems we obtain a lower rank solution compared with GW-cut; moreover, most solutions satisfy the constraint rank = 1. These results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper, we used closed-form solutions of a penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph | m  | KV   | GW-cut UB | GW-cut k | DC obv | DC k | Optimal solutions
C5    | 5  | 4    | 4.525     | 5        | 4      | 1    | (−1, 1, −1, 1, −1)
K5    | 5  | 6    | 6.25      | 5        | 6      | 1    | (−1, 1, −1, 1, −1)
AG    | 5  | 9.28 | 9.6040    | 5        | 9.28   | 1    | (−1, 1, 1, −1, −1)
BG    | 12 | 88   | 90.3919   | 12       | 88     | 1    | (−1, 1, 1, −1, −1, −1, −1, −1, −1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph | m    | SUB   | GW-cut obv | GW-cut k | DC obv | DC k | DC+   | BK
G01   | 800  | 12078 | 10612      | 14       | 11281  | 1    | 11518 | 11624
G02   | 800  | 12084 | 10617      | 31       | 11261  | 1    | 11550 | 11620
G03   | 800  | 12077 | 10610      | 14       | 11275  | 1    | 11540 | 11622
G11   | 800  | 627   | 551        | 30       | 494    | 2    | 528   | 564
G14   | 800  | 3187  | 2800       | 13       | 2806   | 1    | 2980  | 3063
G15   | 800  | 3169  | 2785       | 70       | 2767   | 1    | 2969  | 3050
G16   | 800  | 3172  | 2782       | 14       | 2800   | 1    | 2974  | 3052
G43   | 1000 | 7027  | 6174       | —        | 6321   | 1    | 6519  | 6660
G45   | 1000 | 7020  | 6168       | —        | 6348   | 1    | 6564  | 6654
G52   | 1000 | 4009  | 3520       | —        | 3383   | 1    | 3713  | 3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

The authors acknowledge the following fund projects. Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 251–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.



0 = ∂f(X̄) + ρX̄ − ρUWU^T,

which implies that X̄ ∈ Υ. By setting X̄ = X^opt, we have

lim_{i→∞} ‖X^i − X^opt‖_F = ‖X̄ − X^opt‖_F = 0,   (45)

which completes the proof.

Now we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex set Γ ∩ Ω.

Theorem 5. Let ξ be the set of optimal solutions of problem (23). Then the iterative sequence {Y^i} generated by the stepwise linear approximative algorithm converges to some Y^opt ∈ ξ as ρ → ∞.

Proof. Generally speaking, subproblem (26) can be solved by projecting a solution of an unconstrained problem onto a closed convex set; that is, Y^{i+1} = P_{Γ∩Ω}(X^{i+1}), where X^{i+1} ∈ argmin H(X, X^i). According to Theorem 4, the sequence {‖X^i − X^opt‖_F} is monotonically nonincreasing and convergent as ρ → ∞. Together with the well-known fact that P_{Γ∩Ω} is a nonexpansive operator, we have

‖Y^i − Y^opt‖_F = ‖P_{Γ∩Ω}(X^i) − P_{Γ∩Ω}(X^opt)‖_F ≤ ‖X^i − X^opt‖_F,   (46)

where Y^opt is the projection of X^opt onto Γ ∩ Ω. Clearly Y^opt ∈ ξ. Hence

0 ≤ lim_{i→∞} ‖Y^i − Y^opt‖_F ≤ lim_{i→∞} ‖X^i − X^opt‖_F = 0,

which implies that {Y^i} converges to Y^opt. We immediately obtain the conclusion.
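The nonexpansiveness of the projection operator used in the argument above is easy to verify numerically. The sketch below (illustrative Python/NumPy; projection onto the PSD cone via eigenvalue clipping is a standard construction and one natural choice for the convex sets appearing in this paper) checks ‖P(A) − P(B)‖_F ≤ ‖A − B‖_F on random symmetric matrices:

```python
import numpy as np

def proj_psd(A):
    # Projection onto the PSD cone: symmetrize, then clip negative eigenvalues
    w, V = np.linalg.eigh((A + A.T) / 2)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
    B = rng.standard_normal((5, 5)); B = (B + B.T) / 2
    lhs = np.linalg.norm(proj_psd(A) - proj_psd(B), "fro")
    rhs = np.linalg.norm(A - B, "fro")
    assert lhs <= rhs + 1e-10   # nonexpansiveness holds in every trial
```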

3. Numerical Results

In this section we demonstrate the performance of the method proposed in Section 2 by applying it to affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes used in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

3.1. Image Reconstruction Problems. In this subsection we apply our method to solve one case of affine rank minimization problems, which can be formulated as

min  (1/2)‖A(X) − b‖²_F
s.t. rank(X) ≤ k,
     X ∈ R^{m×n},   (47)

where X ∈ R^{m×n} is the decision variable and the linear map A: R^{m×n} → R^p and the vector b ∈ R^p are known. Clearly, (47) is a special case of problem (3); hence our method can be suitably applied to (47).
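A concrete instance of the linear map A in (47) is a Gaussian measurement ensemble acting on the vectorized matrix, A(X) = A·vec(X) with A ∈ R^{p×mn}. The following sketch (illustrative, not the paper's code; the normalization is our choice) builds such a map and the data vector b for a rank-k ground truth of the MIT-logo dimensions quoted below:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, p = 46, 81, 5, 1000       # MIT-logo-sized problem from the text

# Rank-k ground truth X0 and an i.i.d. Gaussian measurement matrix
X0 = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
# Scaling by 1/sqrt(p) makes E||A(X)||^2 = ||X||_F^2 (isometric in expectation)
A = rng.standard_normal((p, m * n)) / np.sqrt(p)

def A_op(X):
    # Linear map A: R^{m x n} -> R^p, A(X) = A @ vec(X)
    return A @ X.reshape(-1)

b = A_op(X0)                       # the p linear measurements
print(b.shape)                     # (1000,)
```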

Recht et al. [39] have demonstrated that when we sample linear maps from a class of probability distributions, they

Figure 1: Performance curves under four different measurement matrices (Gauss, Binary, Sparse Binary, and Random Projection): SNR versus the number of linear constraints.

would obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric: first, it must be isometric in expectation; second, the probability of large distortions of length must be exponentially small. In our numerical experiments we use four nearly isometric measurement ensembles. The first is the ensemble with independent identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Binary. The third has zeros in two-thirds of the entries and the others sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Binary. The last is constructed from an orthonormal basis, which we call Random Projection.

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix X, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is 46 × 81 = 3726). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Binary i.i.d., Sparse Binary i.i.d., and Random Projection measurement matrices, with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo X to be recovered if ‖X − X^opt‖_F / ‖X‖_F ≤ 10^{−3}, that is, SNR ≥ 60 dB

(SNR = −20 log₁₀(‖X − X^opt‖_F / ‖X‖_F)). Figures 1 and 2 show

the results of our numerical experiments.

Figure 1 plots the SNR curves under the four measurement matrices mentioned above for different numbers of linear constraints. The values on the x-axis represent the number of linear constraints p, that is, the number of measurements, while those on the y-axis represent the signal-to-noise ratio between the true original X and the X^opt recovered by our method. Noticeably, Gauss and Sparse Binary can successfully reconstruct the MIT logo from about 1000 constraints, while Random Projection requires about


Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices: SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but has the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than 2k(m + n − k). Compare this with Figure 2, which shows the results of the nuclear norm heuristic method in [39], a kind of convex relaxation: there, the sharp transition to perfect recovery occurs near 1200 measurements, approximately equal to 2k(m + n − k). The comparison shows that our method can recover an image with fewer constraints, namely, fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method reaches 120 dB when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% relative to 1500. With 1500 measurements, the accuracy of our method is higher still. Among the precisions of the four measurement matrices, the smallest is that of Random Projection; however, even this value is higher than 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500, we find that our curve rises smoothly compared with the curve in the same interval in Figure 2, which shows that our method is stable for reconstructing images.
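The SNR convention used above ties the 60 dB recovery threshold directly to the 10⁻³ relative error, since SNR = −20 log₁₀(relative error). A minimal check (illustrative Python, names ours):

```python
import numpy as np

def snr_db(X, X_opt):
    # SNR = -20 * log10(||X - X_opt||_F / ||X||_F)
    rel = np.linalg.norm(X - X_opt, "fro") / np.linalg.norm(X, "fro")
    return -20.0 * np.log10(rel)

# A perturbation with relative Frobenius error 1e-3 gives exactly 60 dB
X = np.eye(4)                      # ||X||_F = 2
E = np.full((4, 4), 5e-4)          # ||E||_F = 2e-3, so the relative error is 1e-3
print(snr_db(X, X + E))            # ≈ 60.0
```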

Figure 3 shows the recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. Distinctly, the images recovered by SeDuMi have some shadow, while ours are clear. Moreover, as far as time is concerned, our method is faster, and its computational time is not extremely

Table 1: DC versus SeDuMi in time (seconds).

p      | 800   | 900    | 1000   | 1100   | 1200   | 1500
DC     | 46.82 | 50.49  | 50.78  | 51.38  | 51.96  | 54.09
SeDuMi | 93.26 | 120.16 | 138.24 | 159.44 | 161.76 | 245.38

sensitive to p compared to SeDuMi; the reason is that closed-form solutions can be obtained when we solve this problem. Comparisons are shown in Table 1.

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be reformulated as

min  (1/2)‖P_Ω(X) − P_Ω(M)‖²_F
s.t. rank(X) ≤ k,
     X ∈ R^{m×n},   (48)

where X and M are both m × n matrices and Ω is a subset of index pairs (i, j). Up to now, numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with f(X) = (1/2)‖P_Ω(X) − P_Ω(M)‖²_F.

In our experiments we first created random matrices M_L ∈ R^{m×k} and M_R ∈ R^{m×k} with i.i.d. Gaussian entries and then set M = M_L M_R^T. We sampled a subset Ω of p entries uniformly at random.

entries uniformly at random We compare against singularvalue thresholding (SVT) method [9] FPC FPCA [10] andthe OptSpace (OPT) method [29] using

1003817100381710038171003817119875Ω (119883) minus 119875Ω (119872)1003817100381710038171003817119865

1003817100381710038171003817119875Ω (119872)1003817100381710038171003817119865

le 10minus3 (49)

as a stop criterion The maximum number of iterations is nomore than 1500 in all methods other parameters are defaultWe report result averaged over 10 runs

We use SR = 119901(119898119899) to denote sampling ratio andFR = 119896(119898 + 119899 minus 119896)119901 to represent the dimension of the setof rank 119896 matrices divided by the number of measurementsNS and AT denote number of matrices that are recoveredsuccessfully and average times (seconds) that are successfullysolved respectively RE represents the average relative error ofthe successful recovered matrices We use the relative error

1003817100381710038171003817119872 minus 119883lowast1003817100381710038171003817119865

119872119865 (50)

to estimate the closeness of119883lowast to119872 where119883lowast is the optimalsolution to (48) produced by our method We declared119872 tobe recovered if the relative error was less than 10minus3

Table 2 shows that our method has the close performancewith OptSpace in column of NS These performances arebetter than those of SVT FPC and FPCA However SVTFPC and FPCA cannot obtain the optimal matrix within1500 iterationswhich shows that ourmethodhas advantage infinding the neighborhood of optimal solutionThe results donot mean that these algorithms are invalid but themaximum

Mathematical Problems in Engineering 9

Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

          k = 1, FR = 0.0988          k = 6, FR = 0.5550
Alg       NS   AT    RE               NS   AT    RE
DC        10   0.02  1.23E−04          9   0.12  4.64E−04
SVT       10   0.11  1.59E−04          0   —     —
FPC        9   0.13  6.52E−04          0   —     —
FPCA      10   0.03  3.98E−05          0   —     —
OptSpace  10   0.01  1.31E−04         10   0.21  4.05E−04

          k = 2, FR = 0.1950          k = 7, FR = 0.6388
DC        10   0.03  1.50E−04          9   0.17  6.11E−04
SVT        9   0.18  1.83E−04          0   —     —
FPC        8   0.14  6.52E−04          0   —     —
FPCA      10   0.02  5.83E−05          0   —     —
OptSpace  10   0.01  1.51E−04          9   0.37  5.37E−04

          k = 3, FR = 0.2888          k = 8, FR = 0.7200
DC        10   0.05  2.68E−04          7   0.28  7.90E−04
SVT        2   0.36  2.47E−04          0   —     —
FPC        5   0.25  8.58E−04          0   —     —
FPCA      10   0.06  1.38E−04          0   —     —
OptSpace  10   0.04  1.75E−04          9   0.82  6.68E−04

          k = 4, FR = 0.3800          k = 9, FR = 0.7987
DC        10   0.06  2.84E−04          4   0.40  8.93E−04
SVT        1   0.62  1.38E−04          0   —     —
FPC        1   0.25  8.72E−04          0   —     —
FPCA       0   —     —                 0   —     —
OptSpace  10   0.05  2.68E−04          4   1.48  9.28E−04

          k = 5, FR = 0.4688
DC        10   0.08  3.82E−04
SVT        0   —     —
FPC        0   —     —
FPCA       0   —     —
OptSpace  10   0.09  3.16E−04

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

          k = 1, FR = 0.0995          k = 5, FR = 0.4875
Alg       NS   AT    RE               NS   AT    RE
DC        10   0.30  1.91E−04         10   1.50  5.05E−04
SVT        7   1.82  2.04E−04          0   —     —
FPC       10   3.14  8.06E−04          0   —     —
FPCA      10   0.12  1.73E−04          0   —     —
OptSpace  10   0.03  1.71E−04         10   0.58  5.57E−04

          k = 2, FR = 0.1980          k = 6, FR = 0.5820
DC        10   0.77  3.34E−04          6   2.82  7.45E−04
SVT        0   —     —                 0   —     —
FPC        0   —     —                 0   —     —
FPCA       8   0.21  2.97E−04          0   —     —
OptSpace  10   0.05  2.27E−04          9   0.97  6.57E−04

          k = 3, FR = 0.2955          k = 7, FR = 0.6755
DC        10   0.64  3.03E−04          5   3.24  7.66E−04
SVT        0   —     —                 0   —     —
FPC        0   —     —                 0   —     —
FPCA       6   0.09  2.20E−04          0   —     —
OptSpace  10   0.12  3.76E−04          5   1.89  7.95E−04

          k = 4, FR = 0.3920
DC         9   1.16  4.40E−04
SVT        0   —     —
FPC        0   —     —
FPCA       0   —     —
OptSpace  10   0.22  3.85E−04


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original and the results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

          k = 1, FR = 0.0663          k = 7, FR = 0.4530
Alg       NS   AT    RE               NS   AT    RE
DC        10   0.19  1.49E−04         10   0.61  3.64E−04
SVT       10   0.68  1.66E−04          0   —     —
FPC       10   1.33  4.47E−04          0   —     —
FPCA      10   0.17  6.12E−05          0   —     —
OptSpace  10   0.05  1.29E−04         10   0.47  3.39E−04

          k = 2, FR = 0.1320          k = 8, FR = 0.5120
DC        10   0.20  1.69E−04         10   0.87  4.70E−04
SVT        8   1.39  1.66E−04          0   —     —
FPC       10   2.18  4.84E−04          0   —     —
FPCA      10   0.16  7.68E−05          0   —     —
OptSpace  10   0.04  1.48E−04         10   0.63  3.67E−04

          k = 3, FR = 0.1970          k = 9, FR = 0.5730
DC        10   0.25  1.92E−04          9   1.03  4.85E−04
SVT        4   1.77  1.23E−04          0   —     —
FPC        9   4.32  5.82E−04          0   —     —
FPCA       9   0.19  1.21E−04          0   —     —
OptSpace  10   0.07  1.98E−04         10   1.40  4.87E−04

          k = 4, FR = 0.2613          k = 10, FR = 0.6333
DC        10   0.32  2.30E−04          8   1.36  6.21E−04
SVT        4   9.82  4.67E−04          0   —     —
FPC        7   5.02  6.59E−04          0   —     —
FPCA      10   0.18  6.63E−04          0   —     —
OptSpace  10   0.09  2.08E−04         10   1.80  5.97E−04

          k = 5, FR = 0.3250          k = 11, FR = 0.6930
DC        10   0.37  2.72E−04          9   2.19  7.29E−04
SVT        0   —     —                 0   —     —
FPC        1   8.18  7.34E−04          0   —     —
FPCA      10   0.10  2.20E−04          9   —     —
OptSpace  10   0.19  2.67E−04          5   4.11  7.05E−04

          k = 6, FR = 0.3880          k = 12, FR = 0.7520
DC        10   0.47  3.00E−04          6   2.83  8.79E−04
SVT        0   —     —                 0   —     —
FPC        0   —     —                 0   —     —
FPCA       0   0.09  2.20E−04          0   —     —
OptSpace  10   0.25  3.25E−04          6   6.44  8.47E−04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method has better NS performance and relative errors similar to those of OptSpace. In terms of time, ours is longer than that of OptSpace but shorter than those of FPC and SVT. Tables 3 and 4 show these results. In conclusion, our method performs better on "hard" problems, that is, FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].

Table 5 gives the numerical results for problems with large dimensions. In these experiments the maximum number of iterations is the default of each compared method. We observe that our method can recover the problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, the results by our method, and the results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m (= n)  k   p       SR    FR    Alg       NS  AT      RE
100      10  5666    0.57  0.34  DC        10  0.14    1.91E−04
                                 SVT       10  2.81    1.85E−04
                                 FPC       10  5.23    1.83E−04
                                 FPCA      10  0.28    4.94E−05
                                 OptSpace  10  0.9     1.41E−04
200      10  15665   0.39  0.25  DC        10  0.80    1.65E−04
                                 SVT       10  2.93    2.53E−04
                                 FPC       10  4.18    2.73E−04
                                 FPCA      10  0.08    6.88E−05
                                 OptSpace  10  0.41    1.72E−04
300      10  27160   0.30  0.22  DC        10  2.72    1.67E−04
                                 SVT       10  2.57    1.79E−04
                                 FPC       10  9.8     1.63E−04
                                 FPCA      10  0.39    6.71E−05
                                 OptSpace  10  1.85    1.50E−04
400      10  41269   0.26  0.19  DC        10  5.73    1.56E−04
                                 SVT       10  2.78    1.66E−04
                                 FPC       10  13.28   1.30E−04
                                 FPCA      10  0.73    6.65E−05
                                 OptSpace  10  3.47    1.46E−04
500      10  49471   0.20  0.2   DC        10  13.63   1.73E−04
                                 SVT       10  2.94    1.74E−04
                                 FPC       10  25.75   1.44E−04
                                 FPCA      10  1.12    1.07E−04
                                 OptSpace  10  5.05    1.60E−04
1000     10  119406  0.12  0.17  DC        10  356.41  1.70E−04
                                 SVT       10  4.56    1.66E−04
                                 FPC       10  110.53  1.29E−04
                                 FPCA      10  5.16    1.64E−04
                                 OptSpace  10  14.61   1.50E−04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods specifically developed for matrix completion problems, while our method can be applied to a broader class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

min  Tr(LX)
s.t. diag(X) = e,
     rank(X) = 1,
     X ⪰ 0,     (51)

where X = xx^T, x = (x_1, …, x_n)^T, L = (1/4)(diag(We) − W), e = (1, 1, …, 1)^T, and W is the weight matrix. Problem (51) is a special case of problem (3) with Γ ∩ Ω = {X | diag(X) = e, X ∈ S_+}. Hence our method can be suitably applied to (51).
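To make the structure of (51) concrete: for a cut vector x ∈ {−1, +1}^n, the feasible rank-one matrix X = xx^T satisfies Tr(LX) = x^T L x, which equals the weight of the cut induced by x. The following NumPy sketch checks this on the 5-cycle C5 from Table 6; the helper names are ours, not from the paper.

```python
import numpy as np

def laplacian_quarter(W):
    # L = (1/4) * (diag(W e) - W), with e the all-ones vector.
    e = np.ones(W.shape[0])
    return 0.25 * (np.diag(W @ e) - W)

def cut_value(W, x):
    # Total weight of edges crossing the cut induced by x in {-1, +1}^n.
    n = W.shape[0]
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# A 5-cycle with unit weights (the graph C5 of Table 6).
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

L = laplacian_quarter(W)
x = np.array([-1, 1, -1, 1, -1])         # optimal solution reported in Table 6
X = np.outer(x, x)                       # rank 1, diag(X) = e, X ⪰ 0
print(np.trace(L @ X), cut_value(W, x))  # both equal 4.0, matching KV for C5
```

The agreement of the two printed values illustrates why a rank-one feasible point of (51) encodes a cut of the graph.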

We now address the initialization and termination criteria for our method when applied to (51). We set X⁰ = ee^T; in addition, we choose the initial penalty parameter ρ to be 0.01 and set the parameter τ to 1.1. We use ‖X^{i+1} − X^i‖_F ≤ ε as the termination criterion, with ε = 10⁻⁵.

In our numerical experiments we first test several small-scale max-cut problems; we then run our method on the standard "G" set problems written by Professor Ginaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the dimension of the graph and the rank of the solution, respectively, and "obv" denotes the objective value of the optimal matrix solution obtained by our method.

In Table 6, "KV" denotes the known optimal objective values and "UB" presents upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for these problems: it computed the global optimal solutions of all of them. Moreover, we obtain lower-rank solutions compared with the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds given by the spectral bundle method proposed by Helmberg and Rendl [41]. "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]. "DC" denotes the results generated by our method, "DC+" the results of DC with a neighborhood approximation algorithm, and the rightmost column, titled "BK", the best results known to us according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research and better than the values given by GW-cut, and they are all less than the corresponding upper bounds. For all problems we obtain a lower-rank solution than GW-cut; moreover, most solutions satisfy the constraint rank = 1. These results show that our algorithm is effective for solving max-cut problems.

4. Concluding Remarks

In this paper we used the closed-form solutions of a penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

                     GW-cut       DC
Graph  m   KV   UB        k    obv  k   Optimal solutions
C5     5   4    4.525     5    4    1   (−1, 1, −1, 1, −1)
K5     5   6    6.25      5    6    1   (−1, 1, −1, 1, −1)
AG     5   928  960.40    5    928  1   (−1, 1, 1, −1, −1)
BG     12  88   90.3919  12    88   1   (−1, 1, 1, −1, −1, −1, −1, −1, −1, 1, 1, 1)

Table 7: Numerical results for G problems.

               GW-cut        DC
Graph  m     SUB    obv    k    obv    k   DC+    BK
G01    800   12078  10612  14   11281  1   11518  11624
G02    800   12084  10617  31   11261  1   11550  11620
G03    800   12077  10610  14   11275  1   11540  11622
G11    800   627    551    30   494    2   528    564
G14    800   3187   2800   13   2806   1   2980   3063
G15    800   3169   2785   70   2767   1   2969   3050
G16    800   3172   2782   14   2800   1   2974   3052
G43    1000  7027   6174   —    6321   1   6519   6660
G45    1000  7020   6168   —    6348   1   6564   6654
G52    1000  4009   3520   —    3383   1   3713   3849

as DC programming, which can be solved by the stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors gratefully acknowledge the following funding. Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June-July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.


[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/.

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.



Figure 2: Performance curves of the nuclear norm heuristic method under four different measurement matrices (Gauss, Binary, Sparse Binary, and Random Projection): SNR versus the number of linear constraints.

1200 constraints; Binary requires the fewest constraints, about 950. The Gauss and Sparse Binary measurements reach their maximal SNR of 220 dB at around 1500 constraints; the Binary measurements reach their maximal SNR of 180 dB at around 1400 constraints. Random Projection has a larger phase transition point but the best recovery error. We observe a sharp transition to perfect recovery near 1000 measurements, lower than 2k(m + n − k). Figure 2 shows the results of the nuclear norm heuristic method in [39], a kind of convex relaxation; there the sharp transition to perfect recovery occurs near 1200 measurements, approximately equal to 2k(m + n − k). The comparison shows that our method performs better: it can recover an image with fewer constraints, that is, it needs fewer samples. In addition, we observe that the SNR of the nuclear norm heuristic method reaches 120 dB when the number of measurements is approximately 1500, while our method needs just 1200 measurements, a decrease of 20% relative to 1500. If the number of measurements were 1500, the accuracy of our method would be higher still. Among the precisions of the four measurement matrices, the smallest is that of Random Projection; even so, its value exceeds 120 dB. This illustrates that our method is more accurate. Observing the curves between 1200 and 1500 constraints, we find that our curve rises smoothly compared with the corresponding interval in Figure 2, which shows that our method is stable for reconstructing images.

Figure 3 shows the recovered images under the four measurement matrices with 800, 900, and 1000 linear constraints, and Figure 4 shows the images recovered by our method and by SeDuMi. We chose this interior-point method because it yields the highest accuracy. The images recovered by SeDuMi clearly contain some shadow, while ours are clean. Moreover, as far as time is concerned, our method is faster, and its computational time is not very sensitive to p compared with SeDuMi; the reason is that closed-form solutions can be obtained when we solve this problem. The comparisons are shown in Table 1.

Table 1: DC versus SeDuMi in time (seconds).

p        800    900     1000    1100    1200    1500
DC       46.82  50.49   50.78   51.38   51.96   54.09
SeDuMi   93.26  120.16  138.24  159.44  161.76  245.38

3.2. Matrix Completion Problems. In this subsection we apply our method to matrix completion problems, which can be formulated as

min  (1/2) ‖P_Ω(X) − P_Ω(M)‖²_F
s.t. rank(X) ≤ k,  X ∈ R^{m×n},     (48)

where X and M are both m × n matrices and Ω is a subset of index pairs (i, j). Numerous methods have been proposed to solve the matrix completion problem (see, e.g., [9, 10, 29, 31]). It is easy to see that problem (48) is a special case of the general rank minimization problem (3) with f(X) = (1/2)‖P_Ω(X) − P_Ω(M)‖²_F.
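The sampling operator P_Ω and the objective f in (48) can be written down directly. This is a minimal NumPy sketch with our own helper names; the solver itself is the DC method described earlier and is not reproduced here.

```python
import numpy as np

def P_Omega(X, Omega):
    # Projection onto the observed entries: keeps X on Omega, zero elsewhere.
    Y = np.zeros_like(X)
    Y[Omega] = X[Omega]
    return Y

def f(X, M, Omega):
    # f(X) = (1/2) * ||P_Omega(X) - P_Omega(M)||_F^2, the objective of (48).
    R = P_Omega(X, Omega) - P_Omega(M, Omega)
    return 0.5 * np.linalg.norm(R, 'fro') ** 2

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
Omega = ([0, 1, 2], [1, 2, 3])   # observed index pairs (i, j)
print(f(M, M, Omega))            # 0.0: M itself attains the minimum of (48)
```

Note that f only depends on X through its entries on Ω, which is why a rank constraint (or penalty) is needed to pin down the unobserved entries.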

In our experiments we first created random matrices119872119871isin 119877119898times119896 and 119872

119877isin 119877119898times119896 with iid Gaussian entries

and then set 119872 = 119872119871119872119879

119877 We sampled a subset Ω of 119901

entries uniformly at random We compare against singularvalue thresholding (SVT) method [9] FPC FPCA [10] andthe OptSpace (OPT) method [29] using

1003817100381710038171003817119875Ω (119883) minus 119875Ω (119872)1003817100381710038171003817119865

1003817100381710038171003817119875Ω (119872)1003817100381710038171003817119865

le 10minus3 (49)

as a stop criterion The maximum number of iterations is nomore than 1500 in all methods other parameters are defaultWe report result averaged over 10 runs

We use SR = 119901(119898119899) to denote sampling ratio andFR = 119896(119898 + 119899 minus 119896)119901 to represent the dimension of the setof rank 119896 matrices divided by the number of measurementsNS and AT denote number of matrices that are recoveredsuccessfully and average times (seconds) that are successfullysolved respectively RE represents the average relative error ofthe successful recovered matrices We use the relative error

1003817100381710038171003817119872 minus 119883lowast1003817100381710038171003817119865

119872119865 (50)

to estimate the closeness of119883lowast to119872 where119883lowast is the optimalsolution to (48) produced by our method We declared119872 tobe recovered if the relative error was less than 10minus3

Table 2 shows that our method has the close performancewith OptSpace in column of NS These performances arebetter than those of SVT FPC and FPCA However SVTFPC and FPCA cannot obtain the optimal matrix within1500 iterationswhich shows that ourmethodhas advantage infinding the neighborhood of optimal solutionThe results donot mean that these algorithms are invalid but themaximum

Mathematical Problems in Engineering 9

Table 2 Numerical results for problems with119898 = 119899 = 40 and SR = 05

Alg 119896 FR NS AT RE 119896 FR NS AT RADC

1 00988

10 002 123119864 minus 04

6 05550

9 012 464119864 minus 04

SVT 10 011 159119864 minus 04 0 mdash mdashFPC 9 013 652119864 minus 04 0 mdash mdashFPCA 10 003 398119864 minus 05 0 mdash mdashOptSpace 10 001 131119864 minus 04 10 021 405119864 minus 04

DC

2 01950

10 003 150119864 minus 04

7 06388

9 017 611119864 minus 04

SVT 9 018 183119864 minus 04 0 mdash mdashFPC 8 014 652119864 minus 04 0 mdash mdashFPCA 10 002 583119864 minus 05 0 mdash mdashOptSpace 10 001 151119864 minus 04 9 037 537119864 minus 04

DC

3 02888

10 005 268119864 minus 04

8 07200

7 028 790119864 minus 04

SVT 2 036 247119864 minus 04 0 mdash mdashFPC 5 025 858119864 minus 04 0 mdash mdashFPCA 10 006 138119864 minus 04 0 mdash mdashOptSpace 10 004 175119864 minus 04 9 082 668119864 minus 04

DC

4 03800

10 006 284119864 minus 04

9 07987

4 040 893119864 minus 04

SVT 1 062 138119864 minus 04 0 mdash mdashFPC 1 025 872119864 minus 04 0 mdash mdashFPCA 0 mdash mdash 0 mdash mdashOptSpace 10 005 268119864 minus 04 4 148 928119864 minus 04

DC

5 04688

10 008 382119864 minus 04

SVT 0 mdash mdashFPC 0 mdash mdashFPCA 0 mdash mdashOptSpace 10 009 316119864 minus 04

Table 3 Numerical results for problems with119898 = 119899 = 100 and SR = 02

Alg 119896 FR NS AT RE 119896 FR NS AT RADC

1 00995

10 030 191119864 minus 04

5 04875

10 150 505119864 minus 04

SVT 7 182 204119864 minus 04 0 mdash mdashFPC 10 314 806119864 minus 04 0 mdash mdashFPCA 10 012 173119864 minus 04 0 mdash mdashOptSpace 10 003 171119864 minus 04 10 058 557119864 minus 04

DC

2 01980

10 077 334119864 minus 04

6 05820

6 282 745119864 minus 04

SVT 0 mdash mdash 0 mdash mdashFPC 0 mdash mdash 0 mdash mdashFPCA 8 021 297119864 minus 04 0 mdash mdashOptSpace 10 005 227119864 minus 04 9 097 657119864 minus 04

DC

3 02955

10 064 303119864 minus 04

7 06755

5 324 766119864 minus 04

SVT 0 mdash mdash 0 mdash mdashFPC 0 mdash mdash 0 mdash mdashFPCA 6 009 220119864 minus 04 0 mdash mdashOptSpace 10 012 376119864 minus 04 5 189 795119864 minus 04

DC

4 03920

9 116 440119864 minus 04

SVT 0 mdash mdashFPC 0 mdash mdashFPCA 0 mdash mdashOptSpace 10 022 385119864 minus 04

10 Mathematical Problems in Engineering

Figure 3 Recovered images under four measurement matrices with different numbers of linear constraints From left to right the originalresults under Gauss Binary Sparse Binary and Random Projection measurement From top to bottom 119901 = 800 119901 = 900 and 119901 = 1000

Table 4 Numerical results for problems with119898 = 119899 = 100 and SR = 03

Alg 119896 FR NS AT RE 119896 FR NS AT RADC

1 00663

10 019 149E minus 04

7 04530

10 061 364E minus 04SVT 10 068 166E minus 04 0 mdash mdashFPC 10 133 447E minus 04 0 mdash mdashFPCA 10 017 612E minus 05 0 mdash mdashOptSpace 10 005 129E minus 04 10 047 339E minus 04DC

2 01320

10 020 169E minus 04

8 05120

10 087 470E minus 04SVT 8 139 166E minus 04 0 mdash mdashFPC 10 218 484E minus 04 0 mdash mdashFPCA 10 016 768E minus 05 0 mdash mdashOptSpace 10 004 148E minus 04 10 063 367E minus 04DC

3 01970

10 025 192E minus 04

9 05730

9 103 485E minus 04SVT 4 177 123E minus 04 0 mdash mdashFPC 9 432 582E minus 04 0 mdash mdashFPCA 9 019 121E minus 04 0 mdash mdashOptSpace 10 007 198E minus 04 10 140 487E minus 04DC

4 02613

10 032 230E minus 04

10 06333

8 136 621E minus 04SVT 4 982 467E minus 04 0 mdash mdashFPC 7 502 659E minus 04 0 mdash mdashFPCA 10 018 663E minus 04 0 mdash mdashOptSpace 10 009 208E minus 04 10 180 597E minus 04DC

5 03250

10 037 272E minus 04

11 06930

9 219 729E minus 04SVT 0 mdash mdash 0 mdash mdashFPC 1 818 734E minus 04 0 mdash mdashFPCA 10 010 220E minus 04 9 mdash mdashOptSpace 10 019 267E minus 04 5 411 705E minus 04DC

6 03880

10 047 300E minus 04

12 07520

6 283 879E minus 04SVT 0 mdash mdash 0 mdash mdashFPC 0 mdash mdash 0 mdash mdashFPCA 0 009 220E minus 04 0 mdash mdashOptSpace 10 025 325E minus 04 6 644 847E minus 04

number of iterations is limited to 1500 When the rank ofmatrix is large (ie 119896 ge 5) the average time of our algorithmis less than that of OptSpace In some cases the relative errorsof DC are minimal When the dimension of matrix becomeslargerDCmethodhas a better performance ofNS and similarrelative errors with that of OptSpace In terms of time ourtime is longer than that of OptSpace but shorter than thatof FPC and SVT Tables 3 and 4 show the above results In

conclusion our method has better performance in ldquohardrdquoproblems that is FR gt 034 The problems involved matricesthat are not of very low rank according to [10]

Table 5 gives the numerical results for problemswith largedimensions In these numerical experiments the maximumnumber of iterations is chosen by default of each comparedmethod We observe that our method can recover problemssuccessfully as other four methods while our method is

Mathematical Problems in Engineering 11

Figure 4 Recovered images under different numbers of linear con-straints From left to right the original results under our methodand results by SeDuMi From top to bottom 119901 = 800 119901 = 900 and119901 = 1000

Table 5: Numerical results for problems with different dimensions.

m(n)  | k  | p      | SR   | FR   | Alg      | NS | AT     | RA
100   | 10 | 5666   | 0.57 | 0.34 | DC       | 10 | 0.14   | 1.91E-04
      |    |        |      |      | SVT      | 10 | 2.81   | 1.85E-04
      |    |        |      |      | FPC      | 10 | 5.23   | 1.83E-04
      |    |        |      |      | FPCA     | 10 | 0.28   | 4.94E-05
      |    |        |      |      | OptSpace | 10 | 0.9    | 1.41E-04
200   | 10 | 15665  | 0.39 | 0.25 | DC       | 10 | 0.80   | 1.65E-04
      |    |        |      |      | SVT      | 10 | 2.93   | 2.53E-04
      |    |        |      |      | FPC      | 10 | 4.18   | 2.73E-04
      |    |        |      |      | FPCA     | 10 | 0.08   | 6.88E-05
      |    |        |      |      | OptSpace | 10 | 0.41   | 1.72E-04
300   | 10 | 27160  | 0.30 | 0.22 | DC       | 10 | 2.72   | 1.67E-04
      |    |        |      |      | SVT      | 10 | 2.57   | 1.79E-04
      |    |        |      |      | FPC      | 10 | 9.8    | 1.63E-04
      |    |        |      |      | FPCA     | 10 | 0.39   | 6.71E-05
      |    |        |      |      | OptSpace | 10 | 1.85   | 1.50E-04
400   | 10 | 41269  | 0.26 | 0.19 | DC       | 10 | 5.73   | 1.56E-04
      |    |        |      |      | SVT      | 10 | 2.78   | 1.66E-04
      |    |        |      |      | FPC      | 10 | 13.28  | 1.30E-04
      |    |        |      |      | FPCA     | 10 | 0.73   | 6.65E-05
      |    |        |      |      | OptSpace | 10 | 3.47   | 1.46E-04
500   | 10 | 49471  | 0.20 | 0.20 | DC       | 10 | 13.63  | 1.73E-04
      |    |        |      |      | SVT      | 10 | 2.94   | 1.74E-04
      |    |        |      |      | FPC      | 10 | 25.75  | 1.44E-04
      |    |        |      |      | FPCA     | 10 | 1.12   | 1.07E-04
      |    |        |      |      | OptSpace | 10 | 5.05   | 1.60E-04
1000  | 10 | 119406 | 0.12 | 0.17 | DC       | 10 | 356.41 | 1.70E-04
      |    |        |      |      | SVT      | 10 | 4.56   | 1.66E-04
      |    |        |      |      | FPC      | 10 | 110.53 | 1.29E-04
      |    |        |      |      | FPCA     | 10 | 5.16   | 1.64E-04
      |    |        |      |      | OptSpace | 10 | 14.61  | 1.50E-04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, whereas our method can be applied to a wider class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

    min  Tr(LX)
    s.t. diag(X) = e,
         rank(X) = 1,
         X ⪰ 0,                                        (51)

where X = xx^T, x = (x_1, ..., x_n)^T, L = (1/4)(diag(We) - W), e = (1, 1, ..., 1)^T, and W is the weight matrix. Problem (51) is a special case of problem (3) with Γ ∩ Ω = {X | diag(X) = e, X ∈ S^n_+}. Hence our method can be suitably applied to (51).
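To make the objective concrete: with L = (1/4)(diag(We) - W) and X = xx^T for a sign vector x ∈ {-1, 1}^n, Tr(LX) = x^T L x equals the weight of the cut induced by x. A minimal pure-Python check of this identity (our illustration, not code from the paper) on the 5-cycle C5 with unit weights, whose optimal cut value of 4 appears in Table 6:

```python
def cut_objective(W, x):
    # x^T L x with L = (1/4) * (diag(W e) - W); should equal the cut weight of x
    n = len(W)
    degrees = [sum(W[i]) for i in range(n)]  # entries of the vector W e
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return 0.25 * (sum(degrees[i] * x[i] * x[i] for i in range(n)) - quad)

def cut_weight(W, x):
    # direct cut weight: sum of w_ij over edges {i, j} with x_i != x_j
    n = len(W)
    return sum(W[i][j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])

# C5: the cycle on 5 vertices with unit edge weights
W = [[0] * 5 for _ in range(5)]
for i in range(5):
    W[i][(i + 1) % 5] = W[(i + 1) % 5][i] = 1

x = [-1, 1, -1, 1, -1]  # the rank-one solution reported in Table 6
assert cut_objective(W, x) == cut_weight(W, x) == 4
```

The same vector on the complete graph K5 cuts the 2-by-3 vertex split, giving the value 6 reported in Table 6.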

We now address the initialization and termination criteria for our method when applied to (51). We set X^0 = ee^T; in addition, we choose the initial penalty parameter ρ to be 0.01 and set the parameter τ to be 1.1. We use ||X^{i+1} - X^i||_F ≤ ε as the termination criterion, with ε = 10^{-5}.

In our numerical experiments we first test several small-scale max-cut problems; then we run our method on the standard "G" set problems created by Professor Rinaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" represent the dimension of the graph and the rank of the solution, respectively, and "obv" denotes the objective value of the optimal matrix solution obtained by our method.
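The initialization and continuation scheme just described fits a generic penalty loop; the sketch below shows only the control flow (initial penalty ρ = 0.01, inflation factor τ = 1.1, Frobenius-norm stopping test with ε = 10^-5). Here `dc_subproblem_step` is a hypothetical placeholder for the paper's stepwise linear approximative update, not an implementation of it:

```python
def frob_dist(A, B):
    # Frobenius norm of A - B, for matrices stored as lists of lists
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)) ** 0.5

def penalty_loop(X0, dc_subproblem_step, rho=0.01, tau=1.1, eps=1e-5, max_iter=1500):
    # Take one subproblem step, inflate the penalty parameter rho by tau,
    # and stop once successive iterates satisfy ||X_{i+1} - X_i||_F <= eps.
    X = X0
    for _ in range(max_iter):
        X_next = dc_subproblem_step(X, rho)
        if frob_dist(X_next, X) <= eps:
            return X_next
        X, rho = X_next, tau * rho
    return X
```

For the max-cut instance (51) one would start from X0 = ee^T, that is, the all-ones matrix.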

In Table 6, "KV" denotes the known optimal objective values and "UB" denotes upper bounds for max-cut given by the semidefinite relaxation. It is easy to observe that our method is effective for solving these problems: it computed the global optimal solutions of all of them. Moreover, we obtain lower-rank solutions than the semidefinite relaxation method.

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds obtained by the spectral bundle method proposed by Helmberg and Rendl [41]; "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42]; "DC" denotes the results generated by our method; "DC+" denotes the results of DC with a neighborhood approximation algorithm; and the rightmost column, "BK", gives the best known results according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research, better than the values given by GW-cut, and all less than the corresponding upper bounds. For all problems we obtain a lower-rank solution than GW-cut; moreover, most solutions satisfy the constraint rank(X) = 1. These results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper we used the closed form of a penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

Graph | m  | KV  | GW-cut: UB | k  | DC: obv | k | Optimal solutions
C5    | 5  | 4   | 4.525      | 5  | 4       | 1 | (-1, 1, -1, 1, -1)
K5    | 5  | 6   | 6.25       | 5  | 6       | 1 | (-1, 1, -1, 1, -1)
AG    | 5  | 928 | 960.40     | 5  | 928     | 1 | (-1, 1, 1, -1, -1)
BG    | 12 | 88  | 90.3919    | 12 | 88      | 1 | (-1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1)

Table 7: Numerical results for G problems.

Graph | m    | SUB   | GW-cut: obv | k  | DC: obv | k | DC+   | BK
G01   | 800  | 12078 | 10612       | 14 | 11281   | 1 | 11518 | 11624
G02   | 800  | 12084 | 10617       | 31 | 11261   | 1 | 11550 | 11620
G03   | 800  | 12077 | 10610       | 14 | 11275   | 1 | 11540 | 11622
G11   | 800  | 627   | 551         | 30 | 494     | 2 | 528   | 564
G14   | 800  | 3187  | 2800        | 13 | 2806    | 1 | 2980  | 3063
G15   | 800  | 3169  | 2785        | 70 | 2767    | 1 | 2969  | 3050
G16   | 800  | 3172  | 2782        | 14 | 2800    | 1 | 2974  | 3052
G43   | 1000 | 7027  | 6174        | —  | 6321    | 1 | 6519  | 6660
G45   | 1000 | 7020  | 6168        | —  | 6348    | 1 | 6564  | 6654
G52   | 1000 | 4009  | 3520        | —  | 3383    | 1 | 3713  | 3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors acknowledge the following fund projects: Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June–July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J. F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Houssain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 512–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset.

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.



Table 2: Numerical results for problems with m = n = 40 and SR = 0.5.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0988 | 10 | 0.02 | 1.23E-04 | 6 | 0.5550 | 9  | 0.12 | 4.64E-04
SVT      |   |        | 10 | 0.11 | 1.59E-04 |   |        | 0  | —    | —
FPC      |   |        | 9  | 0.13 | 6.52E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.03 | 3.98E-05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.31E-04 |   |        | 10 | 0.21 | 4.05E-04
DC       | 2 | 0.1950 | 10 | 0.03 | 1.50E-04 | 7 | 0.6388 | 9  | 0.17 | 6.11E-04
SVT      |   |        | 9  | 0.18 | 1.83E-04 |   |        | 0  | —    | —
FPC      |   |        | 8  | 0.14 | 6.52E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.02 | 5.83E-05 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.01 | 1.51E-04 |   |        | 9  | 0.37 | 5.37E-04
DC       | 3 | 0.2888 | 10 | 0.05 | 2.68E-04 | 8 | 0.7200 | 7  | 0.28 | 7.90E-04
SVT      |   |        | 2  | 0.36 | 2.47E-04 |   |        | 0  | —    | —
FPC      |   |        | 5  | 0.25 | 8.58E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.06 | 1.38E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.75E-04 |   |        | 9  | 0.82 | 6.68E-04
DC       | 4 | 0.3800 | 10 | 0.06 | 2.84E-04 | 9 | 0.7987 | 4  | 0.40 | 8.93E-04
SVT      |   |        | 1  | 0.62 | 1.38E-04 |   |        | 0  | —    | —
FPC      |   |        | 1  | 0.25 | 8.72E-04 |   |        | 0  | —    | —
FPCA     |   |        | 0  | —    | —        |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.68E-04 |   |        | 4  | 1.48 | 9.28E-04
DC       | 5 | 0.4688 | 10 | 0.08 | 3.82E-04 |   |        |    |      |
SVT      |   |        | 0  | —    | —        |   |        |    |      |
FPC      |   |        | 0  | —    | —        |   |        |    |      |
FPCA     |   |        | 0  | —    | —        |   |        |    |      |
OptSpace |   |        | 10 | 0.09 | 3.16E-04 |   |        |    |      |

Table 3: Numerical results for problems with m = n = 100 and SR = 0.2.

Alg      | k | FR     | NS | AT   | RE       | k | FR     | NS | AT   | RE
DC       | 1 | 0.0995 | 10 | 0.30 | 1.91E-04 | 5 | 0.4875 | 10 | 1.50 | 5.05E-04
SVT      |   |        | 7  | 1.82 | 2.04E-04 |   |        | 0  | —    | —
FPC      |   |        | 10 | 3.14 | 8.06E-04 |   |        | 0  | —    | —
FPCA     |   |        | 10 | 0.12 | 1.73E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.03 | 1.71E-04 |   |        | 10 | 0.58 | 5.57E-04
DC       | 2 | 0.1980 | 10 | 0.77 | 3.34E-04 | 6 | 0.5820 | 6  | 2.82 | 7.45E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 8  | 0.21 | 2.97E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 2.27E-04 |   |        | 9  | 0.97 | 6.57E-04
DC       | 3 | 0.2955 | 10 | 0.64 | 3.03E-04 | 7 | 0.6755 | 5  | 3.24 | 7.66E-04
SVT      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |   |        | 0  | —    | —
FPCA     |   |        | 6  | 0.09 | 2.20E-04 |   |        | 0  | —    | —
OptSpace |   |        | 10 | 0.12 | 3.76E-04 |   |        | 5  | 1.89 | 7.95E-04
DC       | 4 | 0.3920 | 9  | 1.16 | 4.40E-04 |   |        |    |      |
SVT      |   |        | 0  | —    | —        |   |        |    |      |
FPC      |   |        | 0  | —    | —        |   |        |    |      |
FPCA     |   |        | 0  | —    | —        |   |        |    |      |
OptSpace |   |        | 10 | 0.22 | 3.85E-04 |   |        |    |      |


Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original, results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg      | k | FR     | NS | AT   | RE       | k  | FR     | NS | AT   | RE
DC       | 1 | 0.0663 | 10 | 0.19 | 1.49E-04 | 7  | 0.4503 | 10 | 0.61 | 3.64E-04
SVT      |   |        | 10 | 0.68 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 1.33 | 4.47E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.17 | 6.12E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.05 | 1.29E-04 |    |        | 10 | 0.47 | 3.39E-04
DC       | 2 | 0.1320 | 10 | 0.20 | 1.69E-04 | 8  | 0.5120 | 10 | 0.87 | 4.70E-04
SVT      |   |        | 8  | 1.39 | 1.66E-04 |    |        | 0  | —    | —
FPC      |   |        | 10 | 2.18 | 4.84E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.16 | 7.68E-05 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.04 | 1.48E-04 |    |        | 10 | 0.63 | 3.67E-04
DC       | 3 | 0.1970 | 10 | 0.25 | 1.92E-04 | 9  | 0.5730 | 9  | 1.03 | 4.85E-04
SVT      |   |        | 4  | 1.77 | 1.23E-04 |    |        | 0  | —    | —
FPC      |   |        | 9  | 4.32 | 5.82E-04 |    |        | 0  | —    | —
FPCA     |   |        | 9  | 0.19 | 1.21E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.07 | 1.98E-04 |    |        | 10 | 1.40 | 4.87E-04
DC       | 4 | 0.2613 | 10 | 0.32 | 2.30E-04 | 10 | 0.6333 | 8  | 1.36 | 6.21E-04
SVT      |   |        | 4  | 9.82 | 4.67E-04 |    |        | 0  | —    | —
FPC      |   |        | 7  | 5.02 | 6.59E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.18 | 6.63E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.09 | 2.08E-04 |    |        | 10 | 1.80 | 5.97E-04
DC       | 5 | 0.3250 | 10 | 0.37 | 2.72E-04 | 11 | 0.6930 | 9  | 2.19 | 7.29E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 1  | 8.18 | 7.34E-04 |    |        | 0  | —    | —
FPCA     |   |        | 10 | 0.10 | 2.20E-04 |    |        | 9  | —    | —
OptSpace |   |        | 10 | 0.19 | 2.67E-04 |    |        | 5  | 4.11 | 7.05E-04
DC       | 6 | 0.3880 | 10 | 0.47 | 3.00E-04 | 12 | 0.7520 | 6  | 2.83 | 8.79E-04
SVT      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPC      |   |        | 0  | —    | —        |    |        | 0  | —    | —
FPCA     |   |        | 0  | 0.09 | 2.20E-04 |    |        | 0  | —    | —
OptSpace |   |        | 10 | 0.25 | 3.25E-04 |    |        | 6  | 6.44 | 8.47E-04


[30] YNesterov ANemirovskii andY Ye Interior-Point PolynomialAlgorithms in Convex Programming Society for Industrial andApplied Mathematics Philadelphia Pa USA 1994

[31] M Weimin Matrix completion models with fixed basis coef-ficients and rank regularized problems with hard constraints[PhD thesis] National University of Singapore 2013

[32] M Fazel Matrix rank minimization with applications [PhDthesis] Stanford University 2002

[33] K M Grigoriadis and E B Beran ldquoAlternating projectionalgorithms for linear matrix inequalities problems with rankconstraintsrdquo in Advances in Linear Matrix Inequality Methodsin Control pp 512ndash267 Society for Industrial and AppliedMathematics Philadelphia Pa USA 1999

[34] Y Gao and D Sun ldquoA majorized penalty approach for cali-brating rank constrained correlation matrix problemsrdquo 2010httpwwwmathnusedusgsimmatsundfMajorPenpdf

[35] S Burer R D C Monteiro and Y Zhang ldquoMaximum stable setformulations and heuristics based on continuous optimizationrdquoMathematical Programming vol 94 no 1 pp 137ndash166 2002

[36] R BhatiaMatrix Analysis Springer Berlin Germany 1997[37] Z Lu ldquoSequential convex programming methods for a class

of structured nonlinear programmingrdquo httpsarxivorgabs12103039

[38] R A Horn and C R Johnson Matrix Analysis CambridgeUniversity Press Cambridge UK 1985

[39] B Recht M Fazel and P A Parrilo ldquoGuaranteed minimum-rank solutions of linear matrix equations via nuclear normminimizationrdquo SIAM Review vol 52 no 3 pp 471ndash501 2010

[40] httpwwwstanfordedusimyyyeyyyeGset[41] C Helmberg and F Rendl ldquoA spectral bundle method for

semidefinite programmingrdquo SIAM Journal on Optimizationvol 10 no 3 pp 673ndash696 2000

[42] M X Goemans and D P Williamson ldquoImproved approxima-tion algorithms for maximum cut and satisfiability problemsusing semidefinite programmingrdquo Journal of ACM vol 42 no6 pp 1115ndash1145 1995

[43] G A Kochenberger J-K Hao Z Lu H Wang and F GloverldquoSolving large scale max cut problems via tabu searchrdquo Journalof Heuristics vol 19 no 4 pp 565ndash571 2013

[44] G Palubeckis andVKrivickiene ldquoApplication ofmultistart tabusearch to the max-cut problemrdquo Information Technology andControl vol 31 no 2 pp 29ndash35 2004


10 Mathematical Problems in Engineering

Figure 3: Recovered images under four measurement matrices with different numbers of linear constraints. From left to right: the original, and results under Gauss, Binary, Sparse Binary, and Random Projection measurements. From top to bottom: p = 800, p = 900, and p = 1000.

Table 4: Numerical results for problems with m = n = 100 and SR = 0.3.

Alg        k   FR       NS   AT     RE          k    FR       NS   AT     RA
DC         1   0.0663   10   0.19   1.49E−04    7    0.4503   10   0.61   3.64E−04
SVT        1   0.0663   10   0.68   1.66E−04    7    0.4503    0   —      —
FPC        1   0.0663   10   1.33   4.47E−04    7    0.4503    0   —      —
FPCA       1   0.0663   10   0.17   6.12E−05    7    0.4503    0   —      —
OptSpace   1   0.0663   10   0.05   1.29E−04    7    0.4503   10   0.47   3.39E−04
DC         2   0.1320   10   0.20   1.69E−04    8    0.5120   10   0.87   4.70E−04
SVT        2   0.1320    8   1.39   1.66E−04    8    0.5120    0   —      —
FPC        2   0.1320   10   2.18   4.84E−04    8    0.5120    0   —      —
FPCA       2   0.1320   10   0.16   7.68E−05    8    0.5120    0   —      —
OptSpace   2   0.1320   10   0.04   1.48E−04    8    0.5120   10   0.63   3.67E−04
DC         3   0.1970   10   0.25   1.92E−04    9    0.5730    9   1.03   4.85E−04
SVT        3   0.1970    4   1.77   1.23E−04    9    0.5730    0   —      —
FPC        3   0.1970    9   4.32   5.82E−04    9    0.5730    0   —      —
FPCA       3   0.1970    9   0.19   1.21E−04    9    0.5730    0   —      —
OptSpace   3   0.1970   10   0.07   1.98E−04    9    0.5730   10   1.40   4.87E−04
DC         4   0.2613   10   0.32   2.30E−04    10   0.6333    8   1.36   6.21E−04
SVT        4   0.2613    4   9.82   4.67E−04    10   0.6333    0   —      —
FPC        4   0.2613    7   5.02   6.59E−04    10   0.6333    0   —      —
FPCA       4   0.2613   10   0.18   6.63E−04    10   0.6333    0   —      —
OptSpace   4   0.2613   10   0.09   2.08E−04    10   0.6333   10   1.80   5.97E−04
DC         5   0.3250   10   0.37   2.72E−04    11   0.6930    9   2.19   7.29E−04
SVT        5   0.3250    0   —      —           11   0.6930    0   —      —
FPC        5   0.3250    1   8.18   7.34E−04    11   0.6930    0   —      —
FPCA       5   0.3250   10   0.10   2.20E−04    11   0.6930    9   —      —
OptSpace   5   0.3250   10   0.19   2.67E−04    11   0.6930    5   4.11   7.05E−04
DC         6   0.3880   10   0.47   3.00E−04    12   0.7520    6   2.83   8.79E−04
SVT        6   0.3880    0   —      —           12   0.7520    0   —      —
FPC        6   0.3880    0   —      —           12   0.7520    0   —      —
FPCA       6   0.3880    0   0.09   2.20E−04    12   0.7520    0   —      —
OptSpace   6   0.3880   10   0.25   3.25E−04    12   0.7520    6   6.44   8.47E−04

number of iterations is limited to 1500. When the rank of the matrix is large (i.e., k ≥ 5), the average time of our algorithm is less than that of OptSpace. In some cases the relative errors of DC are the smallest. When the dimension of the matrix becomes larger, the DC method achieves better NS and relative errors similar to those of OptSpace. In terms of time, our method is slower than OptSpace but faster than FPC and SVT. Tables 3 and 4 show these results. In conclusion, our method performs better on "hard" problems, that is, FR > 0.34; such problems involve matrices that are not of very low rank, according to [10].
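For reference, SR and FR in these tables follow the usual conventions in matrix recovery experiments (cf. [10]): for an m × n matrix of rank k observed through p measurements, SR = p/(mn) is the sampling ratio and FR = k(m + n − k)/p is the ratio of the matrix's degrees of freedom to the number of measurements. The helper below is an illustrative sketch, not the authors' code:

```python
def sampling_ratio(p, m, n):
    """Fraction of the m*n entries that p measurements represent."""
    return p / (m * n)


def freedom_ratio(k, m, n, p):
    """Degrees of freedom of a rank-k m-by-n matrix, k*(m + n - k),
    divided by the number of measurements p. FR > 0.34 marks the
    'hard' regime discussed in the text."""
    return k * (m + n - k) / p


# Example: m = n = 100 with SR = 0.3 gives p = 3000 measurements,
# reproducing the FR column of Table 4.
p = int(0.3 * 100 * 100)
print(round(freedom_ratio(5, 100, 100, p), 4))  # 0.325, the k = 5 row
```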

Table 5 gives the numerical results for problems with large dimensions. In these numerical experiments the maximum number of iterations is chosen by the default of each compared method. We observe that our method recovers problems as successfully as the other four methods, while our method is


Figure 4: Recovered images under different numbers of linear constraints. From left to right: the original, results under our method, and results by SeDuMi. From top to bottom: p = 800, p = 900, and p = 1000.

Table 5: Numerical results for problems with different dimensions.

m(n)   k    p        SR     FR     Alg        NS   AT       RA
100    10   5666     0.57   0.34   DC         10   0.14     1.91E−04
                                   SVT        10   2.81     1.85E−04
                                   FPC        10   5.23     1.83E−04
                                   FPCA       10   0.28     4.94E−05
                                   OptSpace   10   0.9      1.41E−04
200    10   15665    0.39   0.25   DC         10   0.80     1.65E−04
                                   SVT        10   2.93     2.53E−04
                                   FPC        10   4.18     2.73E−04
                                   FPCA       10   0.08     6.88E−05
                                   OptSpace   10   0.41     1.72E−04
300    10   27160    0.30   0.22   DC         10   2.72     1.67E−04
                                   SVT        10   2.57     1.79E−04
                                   FPC        10   9.8      1.63E−04
                                   FPCA       10   0.39     6.71E−05
                                   OptSpace   10   1.85     1.50E−04
400    10   41269    0.26   0.19   DC         10   5.73     1.56E−04
                                   SVT        10   2.78     1.66E−04
                                   FPC        10   13.28    1.30E−04
                                   FPCA       10   0.73     6.65E−05
                                   OptSpace   10   3.47     1.46E−04
500    10   49471    0.20   0.20   DC         10   13.63    1.73E−04
                                   SVT        10   2.94     1.74E−04
                                   FPC        10   25.75    1.44E−04
                                   FPCA       10   1.12     1.07E−04
                                   OptSpace   10   5.05     1.60E−04
1000   10   119406   0.12   0.17   DC         10   356.41   1.70E−04
                                   SVT        10   4.56     1.66E−04
                                   FPC        10   110.53   1.29E−04
                                   FPCA       10   5.16     1.64E−04
                                   OptSpace   10   14.61    1.50E−04

slower as the dimension increases. However, our method is faster than FPC. Though SVT, FPCA, and OptSpace outperform our method in terms of speed, they are methods developed specifically for matrix completion problems, while our method can be applied to a wider class of problems. To demonstrate this, we next apply our method to max-cut problems.

3.3. Max-Cut Problems. In this subsection we apply our method to max-cut problems, which can be formulated as follows:

    min  Tr(LX)
    s.t. diag(X) = e,
         rank(X) = 1,
         X ⪰ 0,                                        (51)

where X = xx^T, x = (x_1, ..., x_n)^T, L = (1/4)(diag(We) − W), e = (1, 1, ..., 1)^T, and W is the weight matrix. Problem (51) is a general case of problem (3) with Γ ∩ Ω = {X | diag(X) = e, X ∈ S^n_+}. Hence our method can be suitably applied to (51).
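The identity behind (51) can be checked numerically: for any cut vector x ∈ {−1, +1}^n with X = xx^T, Tr(LX) equals the total weight of the edges crossing the partition. A small sketch (illustrative, not the authors' code):

```python
import numpy as np


def cut_laplacian(W):
    """L = (1/4) * (diag(We) - W), as in (51)."""
    e = np.ones(W.shape[0])
    return 0.25 * (np.diag(W @ e) - W)


def cut_value(W, x):
    """Total weight of edges crossing the partition encoded by x."""
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])


# 5-cycle with unit weights; x encodes a partition cutting 4 edges.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
x = np.array([-1, 1, -1, 1, -1])
L = cut_laplacian(W)
print(np.trace(L @ np.outer(x, x)))  # 4.0, equal to cut_value(W, x)
```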

We now address the initialization and termination criteria for our method when applied to (51). We set X^0 = ee^T; in addition, we choose the initial penalty parameter ρ to be 0.01 and set the parameter τ to be 1.1. We use ‖X^{i+1} − X^i‖_F ≤ ε as the termination criterion, with ε = 10^{−5}.

In our numerical experiments we first test several small-scale max-cut problems; then we run our method on the standard "G" set problems written by Professor Rinaldi [40]. In all, we consider 10 problems ranging in size from 800 to 1000 variables. The results are shown in Tables 6 and 7, in which "m" and "k" denote the dimension of the graph and the rank of the solution, respectively, and "obv" denotes the objective value of the optimal matrix solution computed by our method.
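The parameter schedule and stopping rule just described can be sketched as an outer loop. Note that `solve_subproblem` below is only a placeholder for the linearized DC step (not reproduced here), so this is a structural sketch under that assumption, not the authors' implementation:

```python
import numpy as np


def run_dc_iterations(solve_subproblem, X0, rho0=0.01, tau=1.1,
                      eps=1e-5, max_iter=1000):
    """Outer loop: grow the penalty parameter rho by the factor tau on
    each pass and stop once successive iterates agree to eps in
    Frobenius norm, matching the termination criterion stated above."""
    X, rho = X0, rho0
    for _ in range(max_iter):
        X_new = solve_subproblem(X, rho)
        if np.linalg.norm(X_new - X, 'fro') <= eps:
            return X_new
        X, rho = X_new, tau * rho
    return X


# Toy check with a contraction standing in for the DC step:
X_star = run_dc_iterations(lambda X, rho: 0.5 * X, np.ones((2, 2)))
```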

In Table 6, "KV" denotes known optimal objective values and "UB" gives upper bounds for max-cut from the semidefinite relaxation. It is easy to observe that our method is effective for solving these problems: it computed the global optimal solutions of all of them. Moreover, we obtain lower rank solutions compared with the semidefinite relaxation method.
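The KV column for the small instances can be verified independently by exhaustive enumeration, assuming C5 is the 5-cycle and K5 the complete graph on five vertices, both with unit weights (an assumption on our part, consistent with KV = 4 and 6):

```python
import itertools

import numpy as np


def max_cut_brute_force(W):
    """Enumerate all 2^(n-1) partitions; feasible only for tiny graphs."""
    n = W.shape[0]
    L = 0.25 * (np.diag(W.sum(axis=1)) - W)  # cut Laplacian, as in (51)
    best = 0.0
    for signs in itertools.product([-1, 1], repeat=n - 1):
        x = np.array((1,) + signs)  # fix x_0 = +1 by symmetry
        best = max(best, x @ L @ x)
    return best


n = 5
C5 = np.zeros((n, n))  # 5-cycle, unit weights (assumed)
for i in range(n):
    C5[i, (i + 1) % n] = C5[(i + 1) % n, i] = 1.0
K5 = np.ones((n, n)) - np.eye(n)  # complete graph, unit weights (assumed)
print(max_cut_brute_force(C5), max_cut_brute_force(K5))  # 4.0 6.0
```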

Table 7 shows the numerical results for the standard "G" set problems. "SUB" denotes upper bounds by the spectral bundle method proposed by Helmberg and Rendl [41], "GW-cut" denotes the approximation algorithm proposed by Goemans and Williamson [42], "DC" denotes the results generated by our method, "DC+" denotes the results of DC with a neighborhood approximation algorithm, and the rightmost column, titled "BK", gives the best results known to us according to [43, 44]. We use "—" to indicate that we did not obtain the rank. From Table 7 we note that our optimal values are close to the results of the latest research, are better than the values given by GW-cut, and are all less than the corresponding upper bounds. For all problems we obtain a lower rank solution compared with GW-cut; moreover, most solutions satisfy the constraint rank(X) = 1. These results show that our algorithm is effective when used to solve max-cut problems.

4. Concluding Remarks

In this paper we used the closed form of the penalty function to reformulate convex constrained rank minimization problems


Table 6: Numerical results for famous max-cut problems.

                       GW-cut          DC
Graph   m    KV        UB        k     obv    k    Optimal solutions
C5      5    4         4.525     5     4      1    (−1, 1, −1, 1, −1)
K5      5    6         6.25      5     6      1    (−1, 1, −1, 1, −1)
AG      5    9.28      9.6040    5     9.28   1    (−1, 1, 1, −1, −1)
BG      12   88        90.3919   12    88     1    (−1, 1, 1, −1, −1, −1, −1, −1, −1, 1, 1, 1)

Table 7: Numerical results for G problems.

                      GW-cut        DC
Graph   m      SUB    obv     k     obv     k    DC+     BK
G01     800    12078  10612   14    11281   1    11518   11624
G02     800    12084  10617   31    11261   1    11550   11620
G03     800    12077  10610   14    11275   1    11540   11622
G11     800    627    551     30    494     2    528     564
G14     800    3187   2800    13    2806    1    2980    3063
G15     800    3169   2785    70    2767    1    2969    3050
G16     800    3172   2782    14    2800    1    2974    3052
G43     1000   7027   6174    —     6321    1    6519    6660
G45     1000   7020   6168    —     6348    1    6564    6654
G52     1000   4009   3520    —     3383    1    3713    3849

as DC programming, which can be solved by a stepwise linear approximative method. Under suitable assumptions, we showed that any accumulation point of the sequence generated by the stepwise linear approximative method is a stable point. The numerical results on affine rank minimization and max-cut problems demonstrate that our method is generally comparable or superior to some existing methods in terms of solution quality, and it can be applied to a wider class of problems. However, the speed of our method needs to be improved.

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

Wanping Yang and Jinkai Zhao are supported by Key Projects of Philosophy and Social Sciences Research (Ministry of Education, China, 15JZD012), the Soft Science Fund of Shaanxi Province of China (2015KRM114), and the Fundamental Research Funds for the Central Universities. Fengmin Xu is supported by the Fundamental Research Funds for the Central Universities and China NSFC Projects (no. 11101325).

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June–July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov, A. Nemirovskii, and Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] M. Weimin, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 251–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/.

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Mathematical Problems in Engineering 11

Figure 4 Recovered images under different numbers of linear con-straints From left to right the original results under our methodand results by SeDuMi From top to bottom 119901 = 800 119901 = 900 and119901 = 1000

Table 5 Numerical results for problems with different dimension

119898(119899) 119896 119901 SR FR Alg NS AT RA

100 10 5666 057 034

DC 10 014 191E minus 04SVT 10 281 185E minus 04FPC 10 523 183E minus 04FPCA 10 028 494E minus 05

OptSpace 10 09 141E minus 04

200 10 15665 039 025

DC 10 080 165E minus 04SVT 10 293 253E minus 04FPC 10 418 273E minus 04FPCA 10 008 688E minus 05

OptSpace 10 041 172E minus 04

300 10 27160 030 022

DC 10 272 167E minus 04SVT 10 257 179E minus 04FPC 10 98 163E minus 04FPCA 10 039 671E minus 05

OptSpace 10 185 150E minus 04

400 10 41269 026 019

DC 10 573 156E minus 04SVT 10 278 166E minus 04FPC 10 1328 130E minus 04FPCA 10 073 665E minus 05

OptSpace 10 347 146E minus 04

500 10 49471 020 02

DC 10 1363 173E minus 04SVT 10 294 174E minus 04FPC 10 2575 144E minus 04FPCA 10 112 107E minus 04

OptSpace 10 505 160E minus 04

1000 10 119406 012 017

DC 10 35641 170E minus 04SVT 10 456 166E minus 04FPC 10 11053 129E minus 04FPCA 10 516 164E minus 04

OptSpace 10 1461 150E minus 04

slower with the increase of dimension However our methodis faster than FPC Though SVT FPCA and OptSpace out-perform our method in terms of speed they are specifically

developedmethod formatrix completion problemswhile ourmethod can be applied to more class of problems To provethis we will apply our method to max-cut problems

33 Max-Cut Problems In this subsection we apply ourmethod tomax-cut problems It can be formulated as follows

min Tr (119871119883)st diag (119883) = 119890

rank (119883) = 1

119883 ⪰ 0

(51)

where119883 = 119909119909119879 119909 = (1199091 119909

119899)119879 119871 = (14)(diag(119882119890) minus119882)

119890 = (1 1 1)119879 and119882 is the weightmatrix Equation (51) is

one general case of problem (3) with Γ cap Ω = 119883 | diag(119883) =119890119883 isin 119878

+ Hence our method can be suitably applied to (51)

Wenowaddress the initialization and termination criteriafor our method when applied to (51) We set 1198830 = 119890119890119879 inaddition we choose the initial penalty parameter 120588 to be 001and set the parameter 120591 to be 11 And we use 119883119894+1minus119883119894

119865le 120576

as the termination criteria with 120598 = 10minus5In our numerical experiments we first test several small

scale max-cut problems then we perform our method onthe standard ldquo119866rdquo set problems written by Professor Ginaldi[40] In all we consider 10 problems ranging in size from800 variables to 1000 variables The results are shown inTables 6 and 7 in which ldquo119898rdquo and ldquo119896rdquo represent correspondingdimension of graph and the rank of solution respectivelyTheldquoobvrdquo denotes objective values of optimal matrix solutionsoperated by our method

In Table 6 ldquoKVrdquo denotes known optimal objective val-ues and ldquoUBrdquo presents upper bounds for max-cut by thesemidefinite relaxation It is easy to observe that our methodis effective for solving these problems and it has computed theglobal optimal solutions of all problems Moreover we obtainlower rank solution compared with semidefinite relaxationmethod

Table 7 shows the numerical results for standard ldquo119866rdquo setproblems ldquoSUBrdquo denotes upper bounds by spectral bundlemethod proposed by Helmberg and Rendl [41] ldquoGW-cutrdquodenotes the approximative algorithm proposed by Goemansand Williamson [42] ldquoDCrdquo denotes results generated byour method DC+ is the results of DC with neighborhoodapproximation algorithm and the right most column titledBK gives best results as we have known according to [43 44]We use ldquomdashrdquo to indicate that we do not obtain the rank FromTable 3 we can note that our optimal values are close tothe results of latest researches better than values given byGW-cut and they are all less than the corresponding upperbounds For the problems we all get the lower rank solutioncompared with GW-cut moreover most solutions satisfy theconstraints rank = 1 The results prove that our algorithm iseffective when used to solve max-cut problems

4 Concluding Remarks

In this paper we used closed-form of penalty function toreformulate convex constrained rankminimization problems

12 Mathematical Problems in Engineering

Table 6 Numerical results for famous max-cut problems

Graph 119898 KV GW-cut DC Optimal solutionsUB 119896 obv 119896

C5 5 4 4525 5 4 1 (minus1 1 minus1 1 minus1)K5 5 6 625 5 6 1 (minus1 1 minus1 1 minus1)AG 5 928 96040 5 928 1 (minus1 1 1 minus1 minus1)BG 12 88 903919 12 88 1 (minus1 1 1 minus1 minus1 minus1 minus1 minus1 minus1 1 1 1)

Table 7 Numerical results for G problems

Graph 119898 SUB GW-cut DC DC+ BKobv 119896 obv 119896

G01 800 12078 10612 14 11281 1 11518 11624G02 800 12084 10617 31 11261 1 11550 11620G03 800 12077 10610 14 11275 1 11540 11622G11 800 627 551 30 494 2 528 564G14 800 3187 2800 13 2806 1 2980 3063G15 800 3169 2785 70 2767 1 2969 3050G16 800 3172 2782 14 2800 1 2974 3052G43 1000 7027 6174 mdash 6321 1 6519 6660G45 1000 7020 6168 mdash 6348 1 6564 6654G52 1000 4009 3520 mdash 3383 1 3713 3849

as DC programming which could be solved by stepwiselinear approximative method Under some suitable assump-tions we showed that accumulation point of the sequencegenerated by the stepwise linear approximative method was astable point The numerical results on affine rank minimiza-tion and max-cut problems demonstrate that our method isgenerally comparable or superior to some existing methodsin terms of solution quality and it can be applied to a classof problems However the speed of our method needs to beimproved

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

The authors need to acknowledge the fund projectsWanpingYang and Jinkai Zhao are supported by Key Projects of Phi-losophy and Social Sciences Research (Ministry of EducationChina 15JZD012) Soft Science Fund of Shaanxi Province ofChina (2015KRM114) and the Fundamental Research Fundsfor the Central Universities Fengmin Xu is supported bythe Fundamental Research Funds for the Central Universitiesand China NSFC Projects (no 11101325)

References

[1] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.

[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman, "Uncovering shared structures in multiclass classification," in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 17–24, ACM, Corvallis, Ore, USA, June 2007.

[3] L. El Ghaoui and P. Gahinet, "Rank minimization under LMI constraints: a framework for output feedback problems," in Proceedings of the European Control Conference (ECC '93), Groningen, The Netherlands, June–July 1993.

[4] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the IEEE American Control Conference (ACC '01), vol. 6, pp. 4734–4739, Arlington, Va, USA, June 2001.

[5] M. W. Trosset, "Distance matrix completion by numerical optimization," Computational Optimization and Applications, vol. 17, no. 1, pp. 11–22, 2000.

[6] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[7] L. Wu, "Fast at-the-money calibration of the LIBOR market model through Lagrange multipliers," Journal of Computational Finance, vol. 6, no. 2, pp. 39–77, 2003.

[8] Z. Zhang and L. Wu, "Optimal low-rank approximation to a correlation matrix," Linear Algebra and Its Applications, vol. 364, pp. 161–187, 2003.

[9] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.

[10] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.

[11] K. Barman and O. Dabeer, "Analysis of a collaborative filter based on popularity amongst neighbors," IEEE Transactions on Information Theory, vol. 58, no. 12, pp. 7110–7134, 2012.

[12] S. T. Aditya, O. Dabeer, and B. K. Dey, "A channel coding perspective of collaborative filtering," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2327–2341, 2011.

[13] N. E. Zubov, E. A. Mikrin, M. S. Misrikhanov, V. N. Ryabchenko, S. N. Timakov, and E. A. Cheremnykh, "Identification of the position of an equilibrium attitude of the international space station as a problem of stable matrix completion," Journal of Computer and Systems Sciences International, vol. 51, no. 2, pp. 291–305, 2012.

[14] T. M. Guerra and L. Vermeiren, "LMI-based relaxed nonquadratic stabilization conditions for nonlinear systems in the Takagi-Sugeno's form," Automatica, vol. 40, no. 5, pp. 823–829, 2004.

[15] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.

[16] Z. Q. Dong, S. Anand, and R. Chandramouli, "Estimation of missing RTTs in computer networks: matrix completion vs compressed sensing," Computer Networks, vol. 55, no. 15, pp. 3364–3375, 2011.

[17] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[18] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.

[19] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.

[20] G. Tauböck, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 255–271, 2010.

[21] Y. Tachwali, W. J. Barnes, F. Basma, and H. H. Refai, "The feasibility of a fast Fourier sampling technique for wireless microphone detection in IEEE 802.22 air interface," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.

[22] J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, "Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 3114–3117, Dallas, Tex, USA, March 2010.

[23] S. Pudlewski and T. Melodia, "On the performance of compressive video streaming for wireless multimedia sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 1–5, IEEE, Cape Town, South Africa, May 2010.

[24] A. H. Mohseni, M. Babaie-Zadeh, and C. Jutten, "Inflating compressed samples: a joint source-channel coding approach for noise-resistant compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 2957–2960, IEEE, Taipei, Taiwan, April 2009.

[25] S. J. Benson, Y. Ye, and X. Zhang, "Solving large-scale sparse semidefinite programs for combinatorial optimization," SIAM Journal on Optimization, vol. 10, no. 2, pp. 443–461, 2000.

[26] Z. Lu, Y. Zhang, and X. Li, "Penalty decomposition methods for rank minimization," Optimization Methods and Software, vol. 30, no. 3, pp. 531–558, 2015.

[27] P. Jain, R. Meka, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," in Advances in Neural Information Processing Systems, pp. 937–945, 2010.

[28] J. P. Haldar and D. Hernando, "Rank-constrained solutions to linear matrix equations using PowerFactorization," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 584–587, 2009.

[29] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.

[30] Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1994.

[31] W. Miao, Matrix completion models with fixed basis coefficients and rank regularized problems with hard constraints [Ph.D. thesis], National University of Singapore, 2013.

[32] M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, 2002.

[33] K. M. Grigoriadis and E. B. Beran, "Alternating projection algorithms for linear matrix inequalities problems with rank constraints," in Advances in Linear Matrix Inequality Methods in Control, pp. 251–267, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1999.

[34] Y. Gao and D. Sun, "A majorized penalty approach for calibrating rank constrained correlation matrix problems," 2010, http://www.math.nus.edu.sg/~matsundf/MajorPen.pdf.

[35] S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," Mathematical Programming, vol. 94, no. 1, pp. 137–166, 2002.

[36] R. Bhatia, Matrix Analysis, Springer, Berlin, Germany, 1997.

[37] Z. Lu, "Sequential convex programming methods for a class of structured nonlinear programming," https://arxiv.org/abs/1210.3039.

[38] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

[39] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.

[40] http://www.stanford.edu/~yyye/yyye/Gset/.

[41] C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," SIAM Journal on Optimization, vol. 10, no. 3, pp. 673–696, 2000.

[42] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[43] G. A. Kochenberger, J.-K. Hao, Z. Lü, H. Wang, and F. Glover, "Solving large scale max cut problems via tabu search," Journal of Heuristics, vol. 19, no. 4, pp. 565–571, 2013.

[44] G. Palubeckis and V. Krivickiene, "Application of multistart tabu search to the max-cut problem," Information Technology and Control, vol. 31, no. 2, pp. 29–35, 2004.

