
    SPARKS@ELECTROMANIA 2K9

IMPLEMENTATION OF AN ANALOG TO INFORMATION CONVERTER

1. Harishchandra Dubey (ECE), 2. Prateesh Raj (EE)

1 B.Tech ECE, Motilal Nehru National Institute of Technology,
E-mail: [email protected]

2 B.Tech EE, Motilal Nehru National Institute of Technology,
E-mail: [email protected]

    Abstract

In this paper, I have implemented analog-to-information conversion (by information I mean something giving the details of the input analog signal; in most cases it is required in the form of an image or another analog signal), based on the theory of Compressed Sensing (also known as compressive sampling). Sampling converts continuous-time signals into sequences of numbers (samples) that can be processed and stored in digital devices (like buffers or registers). I have used a buffer for the storage of sampled signals. It is more appropriate to sample signals by considering them as a union of linear subspaces rather than a single linear space, because a union of subspaces is a better mathematical model for analog signals than a single linear vector space. Recent developments in the areas of Compressive Sensing (sampling) and mathematics, including convex programming, the uniform uncertainty principle, and linear optimization theory, have inspired me to do this project. Compressive Sensing allows us to sample at a rate comparable to that of the signal frequency, unlike traditional sensing (which was based on the Nyquist theorem). The stable sampling condition is related to the important concept of the Restricted Isometry Property (RIP), which helps in the selection of samples containing maximum information about the input signal.

First, the Fourier Sampling Algorithm is applied to the sampled signals. For storage of sampled signals we use a buffer (a unity-gain op-amp in our case). The transformed signals are analysed by appropriate algorithms (in our case, based on convex programming and the uniform uncertainty principle). Up to this stage we have collected only those samples that contain maximum information about the signal. Now, if required, we can reconstruct the signal using the 890 algorithm in SPARCO, a toolbox of MATLAB; otherwise the required information can be taken from the collected samples. With the power of MATLAB (the language of mathematical and scientific calculation), we have thus implemented a converter which not only eliminates the problem posed by the Nyquist sampling theorem but can also serve multiple purposes for us.

    1. Introduction

In real-life situations we mostly deal with analog signals, but the processing of signals is done on digital systems for a number of reasons. At the same time, after performing the required manipulations, or so-called processing, of the digital signals, we have to convert them back into some information (like an image) or into another analog signal, so that they can be used by another device or as some information.

In accordance with the Nyquist sampling theorem, it is very difficult to sample high-bandwidth signals, especially those at radio frequencies, because of the high sampling rate required for efficient sampling. But that is not the end: today we have the modern theory of compressive sensing/sampling, which can be used to eliminate the problems posed by the Nyquist sampling theorem while achieving a satisfactory level of accuracy in the sparse recovery of signals.

Following the same approach, I have tried to implement it in a different way: instead of using all the samples, we will use only the few which contain the maximum information. The selection of these samples is based on the mathematical theory involved.

    2. PROBLEM STATEMENT

Many problems in radar and communication signal processing involve radio frequency (RF) signals, typically 30-300 GHz, of very high bandwidth. This presents a serious challenge to systems that might attempt to use a high-rate Analog-to-Digital Converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist Sampling Theorem (according to which the highest frequency that can be accurately represented is less than one half of the sampling rate).

[Figure: an undersampled waveform. The dashed vertical lines are sample intervals, and the blue dots are the crossing points, the actual samples taken by the conversion process. The sampling rate here is below the Nyquist frequency, so when we reconstruct the waveform we see the problem quite readily.]

The power, stability, and low cost of digital signal processing (DSP) have pushed the analog-to-digital converter (ADC) increasingly close to the front end of many important sensing, imaging, and communication systems. Unfortunately, many systems, especially those operating in the radio frequency (RF) bands, severely stress current ADC technologies. For example, some important radar and communications applications would be best served by an ADC sampling at over 5 GSample/s with a resolution of over 20 bits, a combination that greatly exceeds current capabilities.

It could be decades before ADCs based on current technology will be fast and precise enough for these applications. And even after better ADCs become available, the deluge of data will swamp back-end DSP algorithms. For example, sampling a 1 GHz band at 2 GSample/s and 16 bits per sample generates data at a rate of 4 GB/s, enough to fill a modern hard disk in roughly one minute. In a typical application, only a tiny fraction of this information is actually relevant; the wideband signals in many RF applications often have a large bandwidth but a small information rate. Fortunately, recent developments in mathematics and signal processing have uncovered a promising approach to the ADC bottleneck that enables sensing at a rate comparable to the signal's information rate. A new field, known as Compressive Sensing (CS), establishes mathematically that a relatively small number of non-adaptive, linear measurements can harvest all of the information necessary to faithfully reconstruct sparse or compressible signals.
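The back-of-envelope data rate quoted above is easy to verify; in the sketch below, the 250 GB disk capacity is my own illustrative assumption for a "modern hard disk" of the time:

```python
# Check the figures above: sampling a 1 GHz band at 2 GSample/s
# with 16 bits per sample.
sample_rate = 2e9                                # samples per second
bits_per_sample = 16
rate_bytes = sample_rate * bits_per_sample / 8   # bytes per second
print(rate_bytes / 1e9)                          # 4.0 (GB/s)

disk_bytes = 250e9                               # assumed 250 GB disk
print(disk_bytes / rate_bytes)                   # 62.5 s, roughly one minute
```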

Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this project I develop a general framework for efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. Our main result is an equivalence condition under which the proposed convex algorithm, along with the uniform uncertainty principle, is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the RIP we also prove stability of our approach in the presence of noise and modeling errors. Adapting our results to this context leads to new MMV recovery methods, as well as equivalence conditions under which the entire set can be determined efficiently.

3. PROPOSED SOLUTION

3.1. OUTLINE OF SOLUTION

Our Analog-to-Information Converter (AIC) is inspired by the recent theory of Compressive Sensing (CS), which states that a discrete signal having a sparse representation in some domain can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation.

Analog signals are sampled considering them as a union of linear subspaces rather than a single space; in most practical applications a union of subspaces is found to be a better model for the signal of interest. The signals are transformed using the Fast Fourier Transform and then stored in a buffer. Using convex programming and the uniform uncertainty principle we select the sparse sampled signals that collect maximum information. Finally, using the 890 algorithm in the SPARCO toolbox of MATLAB, we reconstruct the signal as information in some desired format.
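As a toy, end-to-end illustration of this recover-from-few-samples idea, the sketch below measures a synthetic sparse signal with a random matrix and recovers it greedily. Orthogonal Matching Pursuit is used here only as a stand-in for the convex-programming recovery performed with SPARCO in the actual project, and all sizes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 32, 128, 3          # measurements, ambient dimension, sparsity

# Hypothetical sensing step: a random Gaussian matrix plays the role
# of the linear measurement operator (not the AIC hardware itself).
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
c_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
c_true[support] = rng.standard_normal(k)
y = Phi @ c_true              # the few informative samples

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then re-fit the selected columns by least squares.
picked, r = [], y.copy()
for _ in range(k):
    picked.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, picked], y, rcond=None)
    r = y - Phi[:, picked] @ coef

c_hat = np.zeros(m)
c_hat[picked] = coef
assert np.allclose(c_hat, c_true, atol=1e-6)  # recovery succeeds w.h.p.
```

Only n = 32 generic linear samples suffice here to recover a 3-sparse signal of length 128, which is the essential promise of CS that the rest of this section formalizes.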

3.2. UNION OF FINITE DIMENSIONAL LINEAR SUBSPACES MODEL FOR SIGNALS OF INTEREST

A. Subspace Sampling

Traditional sampling theory deals with the problem of recovering an unknown signal x from a set of n samples y_i = f_i(x), where each f_i is some function of x. The signal can be a function of time, x = x(t), or can represent a finite-length vector. The most common type of sampling is linear sampling, in which

    y_i = <s_i, x>,   1 ≤ i ≤ n,                                  (1)

for a set of functions s_i. Here <., .> denotes the standard inner product on the signal space H. For example, if H = L2 is the space of real finite-energy signals, then

    <s_i, x> = ∫ s_i(t) x(t) dt,                                  (2)

while if H = IR^N, then

    <s_i, x> = Σ_{k=1}^{N} s_i(k) x(k).                           (3)

Non-linear sampling also exists, but our focus will be on the linear case only. When H = IR^N, the unknown x as well as the sampling functions s_i are vectors in IR^N. Therefore, the samples can be written conveniently in matrix form as y = S^T x, where S is the matrix with columns s_i. In the more general case, in which H = L2 or any other abstract Hilbert space, we can use set-transformation notation in order to conveniently represent the samples. A set transformation S : IR^n -> H corresponding to sampling vectors {s_i, 1 ≤ i ≤ n} is defined by

    Sc = Σ_{i=1}^{n} c(i) s_i                                     (4)

for all c in IR^n. From the definition of the adjoint, if c = S* x, then c(i) = <s_i, x>. Note that when H = IR^N, S coincides with the matrix above and S* = S^T. Using this notation, we can always express the samples as

    y = S* x,                                                     (5)

where S is a set transformation for arbitrary H and an appropriate matrix when H = IR^N. Our goal is to

recover x from the samples y in IR^n. If the vectors s_i do not span the entire space H, then there are many possible signals x consistent with y. More specifically, if we denote by S the sampling space spanned by the vectors s_i, then clearly S* v = 0 for any v in the orthogonal complement of S. Therefore, if this orthogonal complement is not the trivial space, then adding such a vector v to any solution x of (5) will result in the same samples y. However, by exploiting prior knowledge on x, uniqueness can be guaranteed in many cases. A prior very often assumed is that x lies in a given subspace A of H. If A and S have the same finite dimension, and the orthogonal complement of S intersects A only at the 0 vector, then x can be perfectly recovered from the samples y.
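In the finite-dimensional case H = IR^N, the sampling relation (5) is an ordinary matrix product; a minimal numerical illustration (the sizes and random sampling vectors are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 8, 3                         # ambient dimension, number of samples

S = rng.standard_normal((N, n))     # columns are the sampling vectors s_i
x = rng.standard_normal(N)          # unknown signal

y = S.T @ x                         # eq. (5): y = S* x, here S* = S^T
# each sample is the inner product <s_i, x> of eq. (1)
assert np.allclose(y, [S[:, i] @ x for i in range(n)])
```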

B. Union of Subspaces

When subspace information is available, perfect reconstruction can often be guaranteed. Furthermore, recovery can be implemented by a simple linear transformation of the given samples (5). However, there are many practical scenarios in which we are given prior information about x that is not necessarily in the form of a subspace. Here we focus our attention on the setting where x lies in a union of subspaces

    U = ∪_{i=1}^{n} V_i,                                          (6)

where each V_i is a subspace. Thus, x belongs to one of the V_i, but we do not know a priori to which one. Note that the set U is no longer a subspace. Indeed, if V_i is, for example, a one-dimensional space spanned by the vector v_i, then U contains vectors of the form a*v_i for some i, but does not include their linear combinations. Our goal is to recover a vector x lying in a union of subspaces from a given set of samples. In principle, if we knew which subspace x belonged to, then reconstruction could be obtained using standard sampling results. However, here the problem is more involved, because conceptually we first need to identify the correct subspace, and only then can we recover the signal within that space. Previous work on sampling over a union focused on invertibility and stability results in some generalizations which are useful for us. To achieve this goal, we limit our attention to a subclass of (6) for which stable recovery algorithms can be developed and analyzed. Specifically, we treat

the case in which each V_i has the additional structure

    V_i = Σ_{j ∈ J_i} A_j,   |J_i| = k,                           (7)

where {A_j, 1 ≤ j ≤ m} are a given set of disjoint subspaces, and |J_i| = k denotes a sum over k indices. Thus, each subspace V_i corresponds to a different choice of the k subspaces A_j that comprise the sum. We assume throughout the paper that m and the dimensions d_j = dim(A_j) of the subspaces A_j are finite. Given n samples

    y = S* x                                                      (8)

and the knowledge that x lies in exactly one of the subspaces V_i, we would like to recover the unknown signal x. In this setting, there are C(m, k) possible subspaces comprising the union. An alternative interpretation of our model is as follows. Given an observation vector y, we seek a signal x for which y = S* x and, in addition, x can be written as

    x = Σ_{i=1}^{k} x_i,                                          (9)

where each x_i lies in A_j for some index j. A special case is the standard CS problem, in which x is a vector of length N that has a sparse representation in a given basis defined by an invertible matrix W. Thus, x = Wc, where c is a sparse vector that has at most k non-zero elements. This fits our framework by choosing A_i as the space spanned by the i-th column of W. In this setting m = N, and there are C(N, k) subspaces comprising the union.
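The reduction of this standard CS special case to the form y = Dc can be checked numerically; the sizes and random matrices below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, n = 32, 3, 16

W = rng.standard_normal((N, N))            # basis, invertible w.h.p.
c = np.zeros(N)
c[rng.choice(N, k, replace=False)] = 1.0   # k-sparse coefficient vector
x = W @ c                                  # signal sparse in the basis W

S = rng.standard_normal((N, n))            # sampling vectors as columns
y = S.T @ x                                # samples y = S* x, eq. (8)
D = S.T @ W                                # equivalent measurement matrix
assert np.allclose(y, D @ c)               # so recovering c recovers x = Wc
```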

C. Problem Formulation and Main Results

Given k and the subspaces A_i, we would like to address the following questions:

1) What are the conditions on the sampling vectors s_i, 1 ≤ i ≤ n, in order to guarantee that the sampling is invertible and stable?

2) How can we recover the unique x (regardless of computational complexity)?


3) How can we recover the unique x in an efficient and stable manner?

However, no concrete methods were proposed in order to recover x. Here we provide efficient convex algorithms that recover x in a stable way for arbitrary k, under appropriate conditions on the sampling functions s_i and the spaces A_i. My results are based on an equivalence between the union-of-subspaces problem assuming (7) and that of recovering block-sparse vectors. This allows us to recover x from the given samples by first treating the problem of recovering a block k-sparse vector c from a given set of measurements. This relationship is established in the next section. In the remainder of the paper we therefore focus on the block k-sparse model and develop our results in that context. In particular, we introduce a block RIP condition that ensures uniqueness and stability of our sampling problem. We then suggest an efficient convex optimization problem which approximates an unknown block-sparse vector c. Based on the block RIP we prove that c can be recovered exactly, in a stable way, using the proposed optimization program. Furthermore, in the presence of noise and modeling errors, this algorithm can approximate the best block-k-sparse solution.

D. Uniqueness and Stability

In this section we study the uniqueness and stability of our sampling method. These properties are intimately related to the RIP, which we generalize here to the block-sparse setting. The first question we address is that of uniqueness, namely conditions under which a block-sparse vector c is uniquely determined by the measurement vector y = Dc.

Proposition 1: There is a unique block-k-sparse vector c consistent with the measurements y = Dc if and only if Dc != 0 for every c != 0 that is block 2k-sparse.

Proof: The proof follows from [22, Proposition 4].

We next address the issue of stability. A sampling operator is stable for a set T if and only if there exist constants a > 0 and b < ∞ such that

    a ||x1 - x2||^2 ≤ ||S*(x1 - x2)||^2 ≤ b ||x1 - x2||^2

for all x1, x2 in T.


    (22)

where B is a diagonal matrix that results in unit-norm columns of D, i.e., B = diag(1, 15, 1, 1, 1, 12)^{1/2}. In this example m = 3 and I = {d1 = 2, d2 = 2, d3 = 2}. Suppose that c is block-1-sparse, which corresponds to at most two non-zero values. Brute-force calculations show that the smallest value of δ_2 satisfying the standard RIP (20) is δ_2 = 0.866. On the other hand, the block RIP (21), corresponding to the case in which the two non-zero elements are restricted to occur in one block, is satisfied with δ_{1|I} = 0.289. Increasing the number of non-zero elements to k = 4, we can verify that the standard RIP (20) does not hold for any δ_4 in [0, 1). Indeed, in this example there exist two 4-sparse vectors that result in the same measurements. In contrast, δ_{2|I} = 0.966 satisfies the lower bound in (21) when restricting the 4 non-zero values to two blocks. Consequently, the measurements y = Dc uniquely specify a single block-sparse c. In the next section, we will see that the ability to recover c in a computationally efficient way depends on the constant δ_{2k|I} in the block RIP (21). The smaller the value of δ_{2k|I}, the fewer samples are needed in order to guarantee stable recovery. Both the standard and block RIP constants δ_k, δ_{k|I} are by definition increasing with k. Therefore, it was suggested in [12] to normalize each of the columns of D to 1, so as to start with δ_1 = 0. In the same spirit, we recommend choosing the bases for A_I such that D = S* A has unit-norm columns, corresponding to δ_{1|I} = 0.

B. Recovery Method

We have seen that if D satisfies the block RIP (21) with δ_{2k|I} < 1, then there is a unique block-sparse vector c consistent with (16). The question is how to find c in practice. Below we present an algorithm that will, in principle, find the unique c from the samples y. Unfortunately, though, it has exponential complexity. In the next section we show that under a stronger condition on δ_{2k|I} we can recover c in a stable and efficient manner. Our first claim is that c can be uniquely recovered by solving the optimization problem

    min ||c||_{0,I}  subject to  y = Dc.                          (23)

To show that (23) will indeed recover the true value of c, suppose that there exists a c' such that Dc' = y and ||c'||_{0,I}


3.3. THE FAST FOURIER TRANSFORM

Writing the N-point DFT as

    F(u) = Σ_{x=0}^{N-1} f(x) W_N^{ux},   W_N = e^{-j2π/N},

we can split the sum into its even-indexed and odd-indexed terms. Now, since the square of an N-th root of unity is an (N/2)-th root of unity, we have that

    W_N^{2ux} = W_{N/2}^{ux},

and hence

    F(u) = Σ_{x=0}^{N/2-1} f(2x) W_{N/2}^{ux} + W_N^{u} Σ_{x=0}^{N/2-1} f(2x+1) W_{N/2}^{ux}.

If we call the two sums demarcated above F_even(u) and F_odd(u) respectively, then we have

    F(u) = F_even(u) + W_N^{u} F_odd(u).                          (28)

Note that each of F_even(u) and F_odd(u), for u = 0, 1, ..., N/2 - 1, is in itself a discrete Fourier transform over N/2 = M points. How does this help us? Well,

    W_N^{u+M} = -W_N^{u},

and we can also write

    F(u + M) = F_even(u) - W_N^{u} F_odd(u).                      (30)

Thus, we can compute an N-point DFT by dividing it into two parts:

1) The first half of F(u), for 0 ≤ u < M, can be found from Eqn. 28.

2) The second half, for M ≤ u < N, can be found simply by reusing the same terms differently, as shown by Eqn. 30.

This is obviously a divide-and-conquer method.

To show how many operations this requires, let T(N) be the time taken to perform a transform of size N, measured by the number of multiplications performed. The above analysis shows that

    T(N) = 2 T(N/2) + N/2,

the first term on the right-hand side coming from the two transforms of half the original size, and the second term coming from the multiplications of F_odd(u) by W_N^{u}. Induction can be used to prove that

    T(N) = O(N log N).

A similar argument can also be applied to the number of additions required, to show that the algorithm as a whole takes O(N log N) time.
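The divide-and-conquer recursion above translates directly into code; a compact sketch, checked against the O(N^2) definition of the DFT:

```python
import cmath

def fft(f):
    """Radix-2 decimation-in-time FFT; len(f) must be a power of two."""
    N = len(f)
    if N == 1:
        return list(f)
    Fe = fft(f[0::2])                  # N/2-point DFT of even samples
    Fo = fft(f[1::2])                  # N/2-point DFT of odd samples
    F = [0j] * N
    for u in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * u / N)   # twiddle factor W_N^u
        F[u] = Fe[u] + w * Fo[u]                # first half, Eqn. 28
        F[u + N // 2] = Fe[u] - w * Fo[u]       # second half, Eqn. 30
    return F

def dft(f):                            # direct O(N^2) definition, for checking
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * u * x / N)
                for x in range(N)) for u in range(N)]

sig = [1, 2, 3, 4, 0, 0, 0, 0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(sig), dft(sig)))
```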

Also note that the same algorithm can be used, with a little modification, to perform the inverse DFT too. Going back to the definitions of the DFT and its inverse,


    F(u) = Σ_{x=0}^{N-1} f(x) e^{-j2πux/N}

and

    f(x) = (1/N) Σ_{u=0}^{N-1} F(u) e^{j2πux/N}.

If we take the complex conjugate of the second equation, we have that

    f*(x) = (1/N) Σ_{u=0}^{N-1} F*(u) e^{-j2πux/N}.

This now looks (apart from a factor of 1/N) like a forward DFT, rather than an inverse DFT. Thus, to compute an inverse DFT:

1) take the conjugate of the Fourier-space data,

2) put the conjugate through a forward DFT algorithm,

3) take the conjugate of the result, at the same time multiplying each value by 1/N.
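These three conjugation steps can be verified in a few lines; the direct-form forward DFT below is used only for brevity:

```python
import cmath

def fft(f):  # forward DFT (direct form, standing in for a fast transform)
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * u * x / N)
                for x in range(N)) for u in range(N)]

def ifft_via_fft(F):
    """Inverse DFT using only the forward transform:
    conjugate, forward-DFT, conjugate, scale by 1/N."""
    N = len(F)
    G = fft([v.conjugate() for v in F])
    return [v.conjugate() / N for v in G]

sig = [1 + 0j, 2, 3, 4]
out = ifft_via_fft(fft(sig))           # round trip recovers the signal
assert all(abs(a - b) < 1e-9 for a, b in zip(out, sig))
```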

2D Case:

The same fast Fourier transform algorithm can be used, by applying the separability property of the 2D transform. Rewrite the 2D DFT as

    F(u, v) = Σ_{x=0}^{N-1} e^{-j2πux/N} [ Σ_{y=0}^{N-1} f(x, y) e^{-j2πvy/N} ].

The right-hand sum is basically just a one-dimensional DFT if x is held constant. The left-hand sum is then another one-dimensional DFT performed with the numbers that come out of the first set of sums. So we can compute a two-dimensional DFT by:

1) performing a one-dimensional DFT for each value of x, i.e. for each column of f(x, y), then

2) performing a one-dimensional DFT in the opposite direction (for each row) on the resulting values.

This requires a total of 2N one-dimensional transforms, so the overall process takes O(N^2 log N) time.
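The row-then-column procedure can be checked against the direct double-sum definition of the 2D DFT:

```python
import cmath

def dft1(v):
    N = len(v)
    return [sum(v[x] * cmath.exp(-2j * cmath.pi * u * x / N)
                for x in range(N)) for u in range(N)]

def dft2_separable(img):
    """2-D DFT as 1-D DFTs along one direction, then the other."""
    M, N = len(img), len(img[0])
    rows = [dft1(r) for r in img]                    # transform each row
    out = [[0j] * N for _ in range(M)]
    for v in range(N):                               # then each column
        col = dft1([rows[x][v] for x in range(M)])
        for u in range(M):
            out[u][v] = col[u]
    return out

def dft2_direct(img):
    M, N = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)] for u in range(M)]

img = [[1, 2], [3, 4]]
A, B = dft2_separable(img), dft2_direct(img)
assert all(abs(A[u][v] - B[u][v]) < 1e-9 for u in range(2) for v in range(2))
```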

3.4. STORAGE OF SAMPLED SIGNALS

For storage of sampled signals I have used a buffer, as described below. I have designed a non-inverting amplifier with a gain of exactly 1. The gain for a non-inverting amplifier is given by the formula

    Gain = 1 + R2/R1.

So, if we make R2 zero and R1 infinity, we'll have an amp with a gain of exactly 1. How can we do this? The circuit is surprisingly simple. Here, R2 is a plain wire, which has effectively zero resistance. We can think of R1 as an infinite resistor: we don't have any connection to ground at all. This arrangement is called an op-amp follower, or buffer. The buffer has an output that exactly mirrors the input (assuming it is within range of the voltage rails), so it looks kind of useless at first.

However, the buffer is an extremely useful circuit, since it helps to hold the signal for some time. The input impedance of the op-amp buffer is very high, close to infinity, and the output impedance is very low, just a few ohms.
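A one-line check of the gain formula above, under the usual ideal op-amp assumptions:

```python
def noninverting_gain(r2, r1):
    """Ideal non-inverting op-amp gain: G = 1 + R2/R1."""
    return 1 + r2 / r1

print(noninverting_gain(10_000, 10_000))   # 2.0, a conventional x2 stage
print(noninverting_gain(0, float("inf")))  # 1.0, the follower/buffer case
```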

3.5. SPARSE RECOVERY BY THE 890 ALGORITHM USING THE SPARCO TOOLBOX OF MATLAB

Now, in this section, I give a brief account of SPARCO with emphasis on the application part. Sparco is organized as a flexible framework providing test problems for sparse signal reconstruction as well as a library of operators and tools. The problem suite currently contains 25 problems and 28 operators. The latest version of Sparco and related material (installation guides, prerequisites, code for the sparse MRI toolbox, and test problems packaged with the GPSR solver) can be obtained from www.cs.ubc.ca/labs/scl/sparco. The open-source package Rice Wavelet Toolbox can also be of great help. A brief description of the various Sparco operators is as follows:

At the core of the Sparco architecture is a large library of linear operators. Where possible, specialized code is used for fast evaluation of matrix-vector multiplications. Once an operator has been created,

    D = opDCT(128);

matrix-vector products with the created operator can be accessed as follows:

    y = D(x,1);   % gives y := D x
    x = D(y,2);   % gives x := D'y

A full list of the basic operators available in the Sparco library is given in Tables 3 and 4.
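The two-mode calling convention can be mimicked outside MATLAB; this toy Operator class is only an illustration of the idea, not the real toolbox code:

```python
import numpy as np

class Operator:
    """Minimal stand-in for a Sparco-style operator: mode 1 applies D,
    mode 2 applies the adjoint D'."""
    def __init__(self, A):
        self.A = np.asarray(A)
    def __call__(self, v, mode):
        return self.A @ v if mode == 1 else self.A.conj().T @ v

rng = np.random.default_rng(3)
D = Operator(rng.standard_normal((4, 6)))
x, y = rng.standard_normal(6), rng.standard_normal(4)

# the defining adjoint identity <Dx, y> = <x, D'y> holds
assert np.isclose(D(x, 1) @ y, x @ D(y, 2))
```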

    Matlab classes can be used to overload operationscommonly used for matrices so that the objects in thatclass behave exactly like explicit matrices. Althoughthis mechanism is not used for the implementation ofthe Sparco operators, operator overloading can providea very convenient interface for the user. To facilitatethis feature, Sparco provides the function classOp:

Matlab function      Description

    opBinary          binary (0/1) ensemble
    opBlockDiag       compound operator with operators on the diagonal
    opBlur            two-dimensional blurring operator
    opColumnRestrict  restriction on matrix columns
    opConvolve1d      one-dimensional convolution operator
    opCurvelet2d      two-dimensional curvelet operator
    opDCT             one-dimensional discrete cosine transform
    opDiag            scaling operator
    opDictionary      compound operator with operators abutted
    opDirac           identity operator
    opFFT             one-dimensional FFT
    opFFT2d           two-dimensional FFT
    opFFT2C           centralized two-dimensional FFT
    opFoG             subsequent application of a set of operators
    opGaussian        Gaussian ensemble
    opHaar            one-dimensional Haar wavelet transform
    opHaar2d          two-dimensional Haar wavelet transform
    opHeaviside       Heaviside matrix operator
    opKron            Kronecker product of two operators
    opMask            vector entry selection mask
    opMatrix          wrapper for matrices
    opPadding         pad and unpad operators equally around each side
    opReal            discard imaginary components
    opRestriction     vector entry restriction
    opSign            sign-ensemble operator
    opWavelet         wavelet operator
    opWindowedOp      overcomplete windowed operator

Table 3: The operators in the Sparco library

    C = classOp(op);            % create matrix object C from op
    C = classOp(op,'nCprod');   % additionally, create a global counter variable nCprod


These calls take an operator op and return an object from the operator class for which the main matrix-vector operations are defined. In its second form, the classOp function accepts an optional string argument and creates a global variable that keeps track of the number of multiplications with C and C'. The variable can be accessed from Matlab's base workspace. The following example illustrates the use of classOp:

    F = opFFT(128);
    G = classOp(F);
    g1 = F(y,2);   % gives g1 := F'y
    g2 = G'*y;     % gives g2 := G'y = F'y

Operator type        Matlab function

    Ensembles           opBinary, opSign, opGaussian
    Selection           opMask, opColumnRestrict, opRestriction
    Matrix              opDiag, opDirac, opMatrix, opToMatrix
    Fast operators      opCurvelet, opConvolve1d, opConvolve2d, opDCT, opFFT,
                        opFFT2d, opFFT2C, opHaar, opHaar2d, opHeaviside, opWavelet
    Compound operators  opBlockDiag, opDictionary, opFoG, opKron, opWindowedOp
    Nonlinear           opReal, opPadding

Table 4: Operators grouped by type

Meta operators:

Several tools are available for conveniently assembling more complex operators from the basic operators. The five meta-operators opFoG, opDictionary, opTranspose, opBlockDiag, and opKron take one or more of the basic operators as inputs, and assemble them into a single operator:

    H = opFoG(A1,A2,...);        % H := A1 A2 ... An (composition)
    H = opDictionary(A1,A2,...); % H := [A1 | A2 | ... | An]
    H = opTranspose(A);          % H := A'
    H = opBlockDiag(A1,A2,...);  % H := diag(A1, A2, ...)
    H = opKron(A1,A2);           % H := kron(A1, A2)

A sixth meta-operator, opWindowedOp, is a mixture between opDictionary and opBlockDiag, in which blocks can partially overlap, rather than fully (opDictionary) or not at all (opBlockDiag). A further two differences are that only a single operator is repeated, and that each operator is implicitly preceded by a diagonal window operator.

Ensemble operators and general matrices:

The three ensemble operators (see Table 4) can be instantiated by simply specifying their dimensions and a mode that determines the normalization of the ensembles. Unlike the other operators in the collection, the ensemble operators can be instantiated as explicit matrices (requiring O(m n) storage) or as implicit operators. When instantiated as implicit operators, the random number seeds are saved, and rows and columns are generated on the fly during multiplication, requiring only O(n) storage for the normalization coefficients.

    Selection operators:

    Two selection operators are provided: opMask andopRestriction. In forward mode, the restrictionoperator selects certain entries from the given vectorand returns a correspondingly shortened vector. Incontrast, the mask operator evaluates the dot-productwith a binary vector thus zeroing out the entriesinstead of discarding them, and returns a vector of thesame length as the input vector.
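The difference between the two selection operators is easy to see on a small vector (a NumPy sketch, not Sparco itself):

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
keep = np.array([True, False, True, False, True])

restricted = x[keep]     # opRestriction-style: shortened vector
masked = x * keep        # opMask-style: zeros kept in place, same length

assert restricted.tolist() == [3.0, 4.0, 5.0]
assert masked.tolist() == [3.0, 0.0, 4.0, 0.0, 5.0]
```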

Fast operators:

Sparco also provides support for operators with a special structure for which fast algorithms are available. Such operators in the library include Fourier, discrete cosine, wavelet, two-dimensional curvelet, and one-dimensional convolution of a signal with a kernel. For example, the following code generates a partial Fourier measurement operator (F), a masked version with 30% of the rows randomly zeroed out (M), and a dictionary consisting of an FFT and a scaled Dirac basis (B):

    m = 128;
    D = opDiag(m,0.1);                      % D is a diagonal operator
    F = opFFT(m);                           % F is an FFT
    M = opFoG(opMask(rand(m,1) < 0.7),F);   % M is a masked version of F
    B = opDictionary(F,D);                  % B = [F D]

Utilities:

For general matrices there are three operators: opDirac, opDiag, and opMatrix. The Dirac operator coincides with the identity matrix of the desired size. Diagonal matrices can be generated using opDiag, which takes either a size and a scalar, or a vector containing the diagonal entries. General matrix operators can be created using opMatrix with a (sparse) matrix as an argument. The opToMatrix utility function takes an implicit linear operator and forms and returns an explicit matrix. Figure 1 shows the results of using this utility function on the operators M and B:

    Mexplicit = opToMatrix(M); imagesc(Mexplicit);
    Bexplicit = opToMatrix(B); imagesc(Bexplicit);
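The trick behind opToMatrix, materializing an implicit operator by applying it to the columns of the identity, looks like this in a hypothetical Python sketch:

```python
import numpy as np

def to_matrix(op, n):
    """Materialize an implicit linear operator on IR^n by applying it
    to the columns of the identity (the idea behind opToMatrix)."""
    return np.column_stack([op(e) for e in np.eye(n)])

op = lambda v: np.array([v[0] + v[1], 2 * v[1]])   # toy 2x2 operator
M = to_matrix(op, 2)
assert (M == np.array([[1.0, 1.0], [0.0, 2.0]])).all()
```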

    Using the appropriate algorithms and tools, I carried out the experiments discussed in the next section.

    4. EXPERIMENTATION AND OBSERVATIONS:

    4.1. Aim: to reconstruct an image from a given input image.

    4.6. The output image is as follows:

    5. RESULTS AND INFERENCES

    Though the experimentation was not completely successful, due to some unavoidable circumstances, the result is still satisfactory. Since there is always scope for improvement, the design of such a system in practical form remains a challenge. The top view of the DSP kit used is shown below:


    7. References

    [1] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
    [2] E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Prob., vol. 23, no. 3, pp. 969-986, June 2007.
    [3] M. Rudelson and R. Vershynin, "On sparse reconstruction from Fourier and Gaussian measurements," submitted for publication.
    [4] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," Dec. 1993.
    [5] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG2000 still image compression standard," IEEE Signal Processing Mag., vol. 18, pp. 36-58, Sept. 2001.
    [6] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera architecture using optical-domain compression," in Proc. SPIE Conf. Computational Imaging IV, San Jose, CA, Jan. 2006, pp. 43-52.
    [7] J. Tropp, "Just relax: Convex programming methods for identifying sparse signals in noise," IEEE Trans. Inform. Theory, vol. 52, no. 3, pp. 1030-1051, 2006.
    [8] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.
    [9] R. Baraniuk, H. Choi, F. Fernandes, B. Hendricks, R. Neelamani, V. Ribeiro, J. Romberg, R. Gopinath, H.-T. Guo, M. Lang, J. E. Odegard, and D. Wei, Rice Wavelet Toolbox. http://www.dsp.rice.edu/software/rwt.shtml, 1993.
    [10] E. van den Berg and M. P. Friedlander, "In pursuit of a root," Tech. Rep. TR-2007-19, Department of Computer Science, University of British Columbia, June 2007.
    [11] R. Boisvert, R. Pozo, K. Remington, R. Barrett, and J. Dongarra, "Matrix Market: A web resource for test matrix collections," in The Quality of Numerical Software: Assessment and Enhancement, R. F. Boisvert, ed., Chapman & Hall, London, 1997, pp. 125-137.

    [12] E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, 2006.
    [13] E. J. Candès, L. Demanet, D. L. Donoho, and L.-X. Ying, CurveLab. http://www.curvelet.org/, 2007.
    [14] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," 2007. Submitted to IEEE Signal Processing Magazine.
    [15] D. Malioutov, M. Çetin, and A. S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Trans. Sig. Proc., vol. 53, pp. 3010-3022, 2005.
    [16] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007. arXiv:math/0608665.
    [17] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227-234, 1995.
    [22] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. University of Illinois Press, 1949.
    [23] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Transactions on Signal Processing, vol. 45, pp. 600-616, March 1997.
    [24] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417-1428, 2002.
    [25] E. Candès and J. Romberg, "Quantitative robust uncertainty principles and optimally sparse decompositions," Foundations of Comput. Math., vol. 6, no. 2, pp. 227-254, 2006.
    [26] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
    [27] D. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.


    [28] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," Signal Processing, vol. 86, pp. 572-588, 2006.
    [29] J. A. Tropp, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, vol. 86, pp. 589-602, 2006.
    [30] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst, "Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms," Journal of Fourier Analysis and Applications, published online, DOI:10.1007/s00041-008-9044-y, October 2008.
    [31] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433-1452, 2006.
    [37] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585-620, 2001.
    [38] C. Zhao and P. Zhao, "Sampling theorem and irregular sampling theorem for multiwavelet subspaces," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 705-713, Feb. 2005.
    [39] P. Zhao, C. Zhao, and P. G. Casazza, "Perturbation of regular sampling in shift-invariant spaces for frames," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4643-4648, Oct. 2006.
    [40] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417-1428, Jun. 2002.
    [41] I. Maravic and M. Vetterli, "Sampling and reconstruction of signals with finite rate of innovation in the presence of noise," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788-2805, Aug. 2005.
    [42] P. Dragotti, M. Vetterli, and T. Blu, "Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 1741-1757, May 2007.
    [43] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2435-2476, Oct. 1998.
    [44] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. San Diego: Academic Press, 1999.
    [45] A. M. Bruckstein, T. J. Shan, and T. Kailath, "The resolution of overlapping echos," IEEE Trans. Acoust., Speech, and Signal Process., vol. 33, no. 6, pp. 1357-1367, Dec. 1985.