


260 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-27, NO. 1, FEBRUARY 1982

A Decentralized Strategy for Resource Allocation

BENJAMIN FRIEDLANDER

Abstract—An algorithm is presented for optimal resource allocation in large-scale systems. The approach is based on a two-step noniterative decomposition technique: subsystem optimization is performed locally and the results are combined to provide a global optimum. The unique feature of the approach is that it performs the optimization in a completely decentralized manner, without a central coordinator.

I. INTRODUCTION

The problem of optimally allocating a finite amount of resources to different consumers of these resources has been studied extensively in various contexts such as electric power networks [1], water [2], airline scheduling [3], and energy [4]. Most of the solutions proposed for this problem are centralized: all the pertinent data are collected at a central location where the full optimization problem is solved, using one of the many available techniques of linear or nonlinear programming [5]-[7]. In many situations this type of solution has distinct disadvantages. It requires communication of relatively large amounts of data; the solution may require excessive amounts of computation for high-dimensional problems; and the procedure lacks flexibility due to its centralized nature.

Various decomposition techniques have been proposed to achieve both computational savings and decentralization. Some examples are: the primal resource-directive approaches suggested by Geoffrion [17] and others [21], primal coordination [18], feasible optimization methods [19], and model coordination [20].

All of these techniques are based on the decomposition of a large problem into a multiplicity of smaller problems, which can be solved relatively easily. The computation of the optimal solution involves a central coordinator and an iterative procedure which proceeds as follows. The coordinator provides each subsystem with a set of values for its interaction variables. The subsystem performs local optimization and passes the results (e.g., the gradients or subgradients of the local optimal returns with respect to these interaction variables) to the coordinator.

Based on this information, the coordinator determines new improved values for the interaction variables. This iterative two-level process is


based on variational ideas and generally leads to substantial computational savings due to the reduction of problem dimensionality.

In this correspondence we describe a somewhat different approach which involves a two-step noniterative technique. In the first step the subproblems are solved for a whole set of interaction values. In the second step these partial solutions are successively combined to produce the globally optimum solution. An important feature of this approach is that it does not require a coordinator. The solution is truly decentralized in the sense that all subsystems play similar roles and no subsystem is required to coordinate the actions of others. This feature makes the technique more flexible than the variational techniques, at the price of some increase in computation and storage requirements. The basic idea underlying our approach is closely related to the spatial dynamic programming concept which was developed in [8]-[10]. Similar ideas can also be found in [11]. However, these studies emphasize the computational aspects, rather than the decentralized aspects, of the solution.

In order to describe our approach we will formulate a simple prototype problem. To simplify the discussion we cast the problem in terms of a specific example involving a chain of distributors and warehouses. However, the approach we present is more general and can be applied to other problems, as will be discussed in Section V.

We should note that algorithms with similar features were proposed by other authors. In [22] a more general problem is presented which postulates the presence of a coordinator who allocates resources which are not controlled by the subsystems. The subsystems solve their subproblems for all values of the resources which could be allocated to them by the

Manuscript received March 10, 1980; revised September 2, 1980 and April 22, 1981. The author is with Systems Control Technology, Inc., Palo Alto, CA 94304.

0018-9286/82/0200-0260$00.75 © 1982 IEEE


coordinator. The coordinator then uses their solutions to allocate the resources he controls. This is very similar to the algorithm presented here, except that [22] requires the global optimization to be performed by the central coordinator. Algorithms requiring no central coordinator have been presented by Ho et al. [23] in the context of resource allocation and by Gallager [24] and others in network flow optimization. These algorithms are, however, different from the one presented here.

II. PROBLEM STATEMENT

Consider a chain of distributors selling some product. The chain consists of sales offices ($O_i$) and warehouses ($W_j$). Each office can use supplies from one or more warehouses, and each warehouse serves one or more sales offices. The network of offices and warehouses is depicted in Fig. 1, where each line represents an interaction between a warehouse and an office. The office can place orders to that warehouse, and the warehouse will deliver to the customer. A sales office is not allowed to place orders at warehouses to which it is not connected. The product which is sold by the chain is in great demand. In fact, typically the total demand is greater than the available supply. Therefore, the management of the chain is faced with the problem of how to allocate its resources (product units) in an optimal way, so as to maximize its return or profit.

The total return $R$ is given by the sum of the returns $R_i$ of the sales made at the $M$ offices, i.e.,

$$R = \sum_{i=1}^{M} R_i. \qquad (1)$$

The return $R_i$ is a function of the allocation of product units to customers. For example,

$$R_i = \sum_{n=1}^{N_i} f(I_i(n), n) \qquad (2)$$

where

$N_i$ = number of customers of office $i$

$I_i(n)$ = number of product units supplied by office $i$ to customer $n$.

The return function $f$ is a monotonically increasing function of $I_i(n)$. It may also depend on $n$ (e.g., different customers may be willing to pay different prices for the same product). The customers want to receive a certain number of product units, but due to the scarcity of the product they will settle for a smaller amount.

In a more complicated situation the return $R_i$ may depend on the particular warehouse from which the product was supplied. This may be the case if the warehouses are allowed to fix their rates independently, or if transportation costs (which depend on the relative location of each warehouse) are significant. In this case

$$R_i = \sum_{n=1}^{N_i} \sum_{j=1}^{L} f(I_i^j(n), n, j) \qquad (3)$$

where

$I_i^j(n)$ = number of product units from warehouse $j$ sold by office $i$ to customer $n$

$L$ = total number of warehouses.

Let us also denote

$I_i^j = \sum_{n=1}^{N_i} I_i^j(n)$ = total number of product units allocated to office $i$ from warehouse $j$

$I_i = \sum_{j=1}^{L} I_i^j$ = total number of product units assigned to office $i$

$S^j$ = total number of product units available at warehouse $j$.

Fig. 1. An office/warehouse network.

The problem then is how to allocate product units to customers, i.e., how to choose $\{I_i^j(n)\}$ so as to maximize the total return $R$ subject to the constraint

$$\sum_{i=1}^{M} I_i^j \le S^j, \qquad j = 1, \ldots, L. \qquad (4)$$

This constraint simply states that the total number of product units allocated by a warehouse should not exceed the number of units it has in stock.

It should be noted that the complexity of the problem is caused to a large extent by the fact that warehouses can supply more than one office. If each office had its own dedicated warehouses, it could maximize its return independently of all the other offices. However, such a network structure usually leads to inefficient utilization of resources, since some offices may have more orders than they can meet, while others may be overstocked. Allowing offices to share warehouses will in general lead to increased resource utilization and therefore to increased total return, at the price of a more complicated "control" problem. In fact, a natural question is how to design the office/warehouse network in a good way, but that problem is outside the scope of this correspondence.
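As a concrete (and entirely hypothetical) illustration of the problem data, the network and the stock constraint can be encoded as follows; the connection lists, stock levels, and function names are ours, not the paper's:

```python
# Hypothetical encoding of the office/warehouse problem of Section II.
# Office i may draw units only from the warehouses it is connected to,
# and each warehouse j holds S^j units of stock.
connections = {0: [0, 1], 1: [1, 2], 2: [2, 0]}  # a small 3-office ring
stock = {0: 4, 1: 3, 2: 5}                       # S^j for each warehouse j

def feasible(alloc, stock):
    """Check the stock constraint: the units drawn from each warehouse
    must not exceed what it has in stock. alloc[(i, j)] = I_i^j."""
    used = {j: 0 for j in stock}
    for (i, j), units in alloc.items():
        used[j] += units
    return all(used[j] <= stock[j] for j in stock)

print(feasible({(0, 0): 2, (2, 0): 2}, stock))  # True: warehouse 0 ships 4 <= 4
print(feasible({(0, 1): 2, (1, 1): 2}, stock))  # False: warehouse 1 asked for 4 > 3
```

Offices sharing a warehouse are exactly the couplings that make the global problem nonseparable.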

III. THE OPTIMIZATION ALGORITHM

The office/warehouse network can be considered as a system consisting of many interconnected subsystems. The proposed approach is based on a decomposition of the global optimization problem (for the whole system) into a sequence of smaller optimization problems (for each subsystem). Note that the problem stated in Section II has two parts, as follows:

1) assignment of a total number of product units from the warehouses to a given office, i.e., determination of $\{I_i^j\}_{j=1}^{L}$;

2) allocation of product units to customers by each office, i.e., determination of $\{I_i(n)\}_{n=1}^{N_i}$.

A straightforward optimization solution will find the optimal values of $I_i^j(n)$, from which both $I_i^j$ and $I_i(n)$ can be uniquely determined. In other words, the direct approach solves simultaneously the allocation problems mentioned in 1) and 2) above.

There is, however, another way of approaching the problem. Assume for a moment that the optimal values of $I_i^j$ are somehow known, i.e., each office knows how many product units it will get from the related warehouses. Then the remaining problem of assigning product units to customers can be solved independently by each office. We shall refer to $I_i^j$ as the interaction variables, since it is precisely through these variables that the different subsystems (offices) are coupled.

If the interaction variables are fixed, the problem decomposes into a collection of independent optimization problems. Each of these problems is of low dimension and can be solved relatively easily. The difficulty lies, of course, in the fact that the optimal values of the interaction variables $I_i^j$ are not known.

A way of circumventing this difficulty is for each subsystem to compute the optimal allocation $\{I_i(n)\}_{n=1}^{N_i}$, parameterized on all possible values of


TABLE I


TABLE I1

the interaction variables $\{I_i^j\}_{j=1}^{L}$. Thus, instead of obtaining a single solution we will have a whole set of solutions, one of which is globally optimal. At first glance, this seems to complicate rather than simplify the situation. However, as we will show next, the globally optimal solution can be easily found by combining the parameterized solutions of different subsystems. Furthermore, the total computational requirements are significantly reduced, since each subsystem solves a low-dimensional problem. Replacing a single problem by a parametric class of problems is often referred to as the embedding approach. The idea of embedding has proven to be a very powerful analytical tool; see for example [12] and [13].

To illustrate how this embedding approach works, consider the example depicted in Fig. 1. The solution may proceed as follows.

Step 1 - Local Optimization: All $M$ subsystems (offices) optimize their own return $R_i$, parameterized on the interaction variables. Consider subsystem 1. For each choice of $I_1^1, I_1^{10}$ it will compute

$$\bar R_1(I_1^1, I_1^{10}) = \max_{\{I_1(n)\}_{n=1}^{N_1}} R_1 \qquad (5a)$$

subject to the constraint

$$\sum_{n=1}^{N_1} I_1(n) \le I_1^1 + I_1^{10}. \qquad (5b)$$

The maximization is performed over all possible allocations $I_1(1), \ldots, I_1(N_1)$. The result will be a table containing the value of the locally maximal return $\bar R_1$ for each choice of the interaction variables (see Table I).
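Step 1 can be sketched in code as a brute-force tabulation: for every grid point of the interaction variables, solve the small local allocation problem. Everything here (the concave toy return, the grid bound `max_units`) is an illustrative assumption, not the paper's prescription for solving the subproblem:

```python
import itertools

def local_table(n_customers, f, max_units, n_warehouses):
    """Tabulate the locally optimal return over every choice of the
    interaction variables I_i^j (cf. eq. (5)): the office may hand out
    at most sum_j I_i^j units in total."""
    table = {}
    for interaction in itertools.product(range(max_units + 1), repeat=n_warehouses):
        budget = sum(interaction)
        best = 0.0
        # enumerate all allocations of at most `budget` units to the customers
        for split in itertools.product(range(budget + 1), repeat=n_customers):
            if sum(split) <= budget:
                best = max(best, sum(f(u, n) for n, u in enumerate(split)))
        table[interaction] = best
    return table

# toy monotone return: each extra unit is worth less; customer 0 pays double
f = lambda units, n: (2.0 if n == 0 else 1.0) * units ** 0.5
tab = local_table(n_customers=2, f=f, max_units=2, n_warehouses=2)
print(tab[(1, 1)])  # 3.0: one unit to each customer beats two units to one
```

Any standard optimizer can replace the inner enumeration; only the resulting table matters to the rest of the algorithm.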

Similarly, for subsystem 2 we compute

$$\bar R_2(I_2^1, I_2^2) = \max_{\{I_2(n)\}_{n=1}^{N_2}} R_2 \qquad (6a)$$

subject to

$$\sum_{n=1}^{N_2} I_2(n) \le I_2^1 + I_2^2, \qquad (6b)$$

and for subsystem 3

$$\bar R_3(I_3^2, I_3^3) = \max_{\{I_3(n)\}_{n=1}^{N_3}} R_3 \qquad (7a)$$

subject to

$$\sum_{n=1}^{N_3} I_3(n) \le I_3^2 + I_3^3, \qquad (7b)$$

and so on.

Step 2 - Global Optimization: Here, subsystems are combined one by one into a composite subsystem, and optimized over the interaction variables which are now internal to the composite system. Subsystems can be combined in many different ways, but we assume they are combined in the natural order: $1, 2, \ldots$.

First, we combine subsystems 1 and 2 by computing their composite optimal return $\bar R_{12}$:

$$\bar R_{12}(I_1^{10}, I_2^2) = \max_{I_1^1,\, I_2^1} \left[ \bar R_1(I_1^1, I_1^{10}) + \bar R_2(I_2^1, I_2^2) \right] \qquad (8a)$$

subject to

$$I_1^1 + I_2^1 \le S^1. \qquad (8b)$$

Fig. 2. Combining the partial solutions.

In other words, $\bar R_{12}$ is the optimal return of the composite subsystem, parameterized on the remaining interaction variables between the composite and the rest of the system; see Fig. 2. Note that the computation of $\bar R_{12}$ is done by a simple merging of the tables of $\bar R_1$ and $\bar R_2$. All we have to do is to look at the entries for a given $I_1^{10}, I_2^2$ and choose the combination of $I_1^1, I_2^1$ [subject to (8b)] for which the sum of the returns $\bar R_1 + \bar R_2$ is maximized.
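The table-merging step can be sketched as follows; the key layout, with each table keyed by an (internal, external) pair of interaction values, is an assumption we make for illustration:

```python
def merge_tables(table_a, table_b, shared_stock):
    """Combine two subsystem tables into a composite table (cf. eq. (8)).
    Each key is (internal, external): the internal variables draw on a
    shared warehouse and are optimized out subject to its stock limit;
    the external variables parameterize the composite result."""
    merged = {}
    for (ia, ea), ra in table_a.items():
        for (ib, eb), rb in table_b.items():
            if ia + ib <= shared_stock:          # constraint (8b)
                key = (ea, eb)                   # remaining interaction variables
                if ra + rb > merged.get(key, float("-inf")):
                    merged[key] = ra + rb
    return merged

# merging two tiny tables over a shared warehouse with one unit in stock
t1 = {(0, 0): 0.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 3.0}
t2 = {(0, 0): 0.0, (1, 0): 1.5, (0, 1): 1.0, (1, 1): 2.0}
merged = merge_tables(t1, t2, shared_stock=1)
print(merged[(0, 0)])  # 2.0: the single shared unit is best spent on side a
```

The merge never revisits the customer-level subproblems, which is what keeps the global step cheap.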

Next, we combine the composite subsystem (1,2) with subsystem 3 (see Fig. 2):

$$\bar R_{123}(I_1^{10}, I_3^3) = \max_{I_2^2,\, I_3^2} \left[ \bar R_{12}(I_1^{10}, I_2^2) + \bar R_3(I_3^2, I_3^3) \right] \qquad (9a)$$

subject to

$$I_2^2 + I_3^2 \le S^2. \qquad (9b)$$

This procedure will continue until the composite system encompasses


all the subsystems. At this point the optimal values of all the interaction variables $I_i^j$ have been determined.

This example illustrates the main features of the proposed algorithm. First, a local optimization is performed, parameterized on the interaction variables. The results of this local computation are then combined one by one to produce the final global solution. It is possible to describe this procedure in a more formalized mathematical language (see [8] and [9]), but we prefer to leave the description of the algorithm as presented above. Furthermore, we have not discussed in detail how the subsystem optimization is performed, since any standard technique may be used for that purpose. What we want to emphasize in this correspondence is the decomposition of the problem and the decentralized nature of the solution.

Finally, we should note that the global optimization step does not have to be performed sequentially. It is possible to first consider $M/2$ pairs of subproblems and have them combine their optimization results. This will result in $M/2$ composite subsystems, which could be grouped into $M/4$ pairs to provide a new set of $M/4$ composite subsystems, and so on. This possibility of parallel computation would more completely make use of the available distributed processing capabilities, at the price of somewhat reduced flexibility (due to the need to determine a priori the sequence of pair aggregations).
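The parallel variant described above, merging $M$ results in pairs and then pairs of pairs, is an ordinary tree reduction; a generic sketch (the combine operation is abstracted to a plain function here):

```python
def pairwise_reduce(results, combine):
    """Combine a list of subsystem results in parallelizable rounds:
    M results -> M/2 composites -> M/4 -> ... -> one global result."""
    rounds = 0
    while len(results) > 1:
        nxt = [combine(results[k], results[k + 1])
               for k in range(0, len(results) - 1, 2)]
        if len(results) % 2:          # odd leftover passes through unchanged
            nxt.append(results[-1])
        results = nxt
        rounds += 1
    return results[0], rounds

# eight subsystems collapse in log2(8) = 3 rounds
total, rounds = pairwise_reduce([3, 1, 4, 1, 5, 9, 2, 6], lambda a, b: a + b)
print(total, rounds)  # 31 3
```

Each round's merges are independent of one another, so they could run on separate processors; the a priori pairing schedule is the flexibility cost mentioned in the text.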

IV. SOME IMPLEMENTATION ISSUES

The description of the algorithm in the previous section left out many details. Some of these details are application dependent and will have to be filled in by the reader for his specific problem. In this section we discuss some of the more general issues associated with the proposed algorithm.

A. Computational Requirements

The total optimization problem is decomposed in our approach into two types of optimization problems: optimization of the subsystem itself and optimization of the interactions of the subsystem with the remainder of the system. In the subsystem optimization problem it is necessary to find a solution for every sequence of interaction variables; thus the effective dimensionality of the problem is the sum of the subsystem problem dimension and the number of interaction variables for that subsystem. The subsystem optimization has to be solved $M$ times, where $M$ = number of offices (subsystems). Let the average dimension of the subsystem problem be $k + p$, where $k$ = average number of customers per office and $p$ = average number of warehouses serving each office. (If the return function depends on $I_i^j(n)$ rather than just $I_i(n)$, $k$ has to be replaced by $kp$.) The total problem dimension is $Mk$ (or $Mkp$). Thus, the solution of a subproblem for all $k + p$ (internal as well as interaction) variables will require at most $q^{(k+p)}$ operations, where $q$ is some constant (e.g., the number of "quantization levels" of the variables). Solving $M$ subproblems will require $\sim Mq^{(k+p)}$ operations.

Next, we calculate the effort in combining subsystems. For simplicity we assume that as each pair of subsystems is combined, $p/2$ interaction variables become internal to the new subsystem, on the average. Now the joint subsystem again has $p$ interaction variables, and the same situation holds when combining it with the next subsystem. Thus, the combined table will have $q^p$ entries. To compute each entry we had to look up and add at most $q^{p/2}$ entries in the subsystem tables. This means that $\sim q^{3p/2}$ operations were needed to combine two subsystems and $\sim Mq^{3p/2}$ operations to combine all subsystems. The total computation count is now evaluated as $\sim Mq^p(q^k + q^{p/2})$. In many situations we have $k > p/2$, in which case the amount of computation involved in combining the subsystems is negligible, and the total count is $\sim Mq^{(p+k)}$.

For comparison, consider the solution of the problem using an iterative two-level procedure as described in the Introduction and in [17]-[22]. If each subproblem solution requires at most $\sim q^k$ operations per iteration, then $M$ subproblems and $N$ iterations will require $Mq^kN$ operations. Thus, the relative computational complexity of the two methods is directly related to the ratio $q^p[1 + q^{(p/2 - k)}]/N$. This ratio will generally be larger than one, indicating that the proposed approach requires more computations than the variational approach.
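The two operation counts can be compared numerically; the sizes below are hypothetical, chosen only to show the ratio $q^p[1 + q^{(p/2-k)}]/N$ at work:

```python
# Compare the operation counts derived in the text for hypothetical sizes:
# proposed approach ~ M*q^p*(q^k + q^(p/2)), iterative two-level ~ M*q^k*N.
def proposed_ops(M, q, k, p):
    return M * q**p * (q**k + q**(p / 2))

def iterative_ops(M, q, k, N):
    return M * q**k * N

M, q, k, p, N = 7, 10, 3, 2, 25
ratio = proposed_ops(M, q, k, p) / iterative_ops(M, q, k, N)
print(round(ratio, 3))  # 4.04, i.e. q^p * (1 + q^(p/2 - k)) / N
```

With $k > p/2$ the correction term $q^{(p/2-k)}$ is small, so the ratio is essentially $q^p/N$: a sparse network (small $p$) or a slowly converging iterative scheme (large $N$) favors the proposed approach.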


If the number of iterations $N$ required for the variational approach to converge is fairly large, and the system is sparse (i.e., $p$ is small and $k > p/2$), the proposed approach will be competitive. Even when the ratio is large, it may be desirable to use this approach due to its flexibility, to be discussed in Section IV-C. Another point that should be noted is that if the subproblem solutions can be stored for future use, the computation of the global optimal solution the second time around would be much simpler using the proposed algorithm.

The amount of computation involved in the implementation of the algorithm may depend quite strongly on the order in which the subsystems are combined. Some orders lead to more efficient solutions than others. Basically, we would like to minimize the size of the tables that have to be communicated from subsystem to subsystem. In other words, we want to choose a sequence of composite subsystems with the smallest number of interaction variables. In some situations the best choice is easy to see. For example, for the network in Fig. 1, going "around the ring" (e.g., $O_1, O_2, O_3, O_4, O_5, O_6, O_7$) will give a sequence of composite subsystems with two interaction variables each, while any other order will involve more interaction variables. For more complicated networks, a good ordering may be less obvious and require some preliminary analysis.

Finally, we should point out that various means may be used in specific applications to further reduce the computational requirements. For example, if the optimal interaction variables are approximately known, this information can be used to limit the parameterization of the returns $\bar R_i$ to a small range around the approximate values of $I_i^j$, which will have a strong impact on the amount of computation and storage required. The approximate values may be known from past experience. The resource allocation in the office/warehouse example will be carried out on a regular basis, say once a day. After a while, some experience will be built up about the distribution of the optimal interaction variables $I_i^j$, which can then be incorporated into the algorithm as described above. It should be mentioned that for the example used in this correspondence, which involves monotonic return functions, particularly efficient optimization techniques such as the method of Maximum Marginal Return [14] are available. These techniques involve computation of a sequence of solutions, thus automatically producing the tables for the locally optimal return $\bar R_i$.

Another important aspect of the implementation of the proposed algorithm is the amount of storage it requires. For $p$ interaction variables and $q$ quantization levels, $q^p$ numbers need to be stored. Thus, any reduction in either one of these variables along the lines mentioned above will significantly reduce storage requirements. It should be noted that the availability of sufficient storage (either in core memory or in a secondary storage/retrieval system) is crucial to the flexible operation of the proposed algorithm.

B. Decentralized Implementation of the Algorithm

An important feature of the proposed algorithm is that it lends itself naturally to completely decentralized implementation, without the requirement of a central coordinator. The first step of subsystem optimization is performed independently by each subsystem. The second step of combining the partial solutions involves two subsystems at a time, and can also be performed in a completely decentralized manner. Consider, for example, the following procedure:

• subsystem 1 transmits its optimal return $\bar R_1$ to subsystem 2;

• subsystem 2 combines $\bar R_1$ with $\bar R_2$ to produce the composite return $\bar R_{12}$, and transmits the pertinent information (i.e., the optimal values of the internal interaction variables, $I_1^1$ in this case) back to subsystem 1;

• subsystem 2 transmits the composite return function $\bar R_{12}$ to subsystem 3, and so on.

Note that this procedure is truly decentralized: each subsystem receives information, does some processing, and transmits the results. When the cycle is completed, each subsystem knows the optimal values of its own interaction variables, and can find the optimal allocation $I_i^j(n)$ simply by looking up its own optimal return table $\bar R_i$ (or recomputing the allocations for the now known values of $I_i^j$, if it is not desirable to store the whole table).
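The cycle described above can be simulated in a few lines; here each office's contribution is collapsed to a single scalar best return, which is a drastic simplification of the real table-passing but shows the protocol's shape (any office may start, and one full trip around the ring completes the optimization):

```python
def ring_cycle(local_best, start):
    """Simulate one optimization cycle: the message starts at `start`,
    visits every office around the ring, and each office merges in its
    locally optimal return before forwarding the composite."""
    m = len(local_best)
    composite = 0.0
    for k in range(m):
        office = (start + k) % m
        composite += local_best[office]   # stand-in for the table merge
    return composite

best = [2.0, 3.5, 1.0, 4.0]
# every office reaches the same global value no matter who initiates
print(all(ring_cycle(best, s) == 10.5 for s in range(4)))  # True
```

Because the result is independent of the starting office, no office needs a privileged coordinating role; in the real protocol what circulates is the growing composite return table rather than a scalar.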

In order to implement such an algorithm in a real situation, many more details need to be worked out. For example, there is the question of which


office will initiate the optimization cycle. One possibility is to designate a particular office, the first in the sequence, as being responsible for starting the process. This, however, gives that office a special role, since if any other office wants to do an optimization cycle, it will have to transmit a request to the first office. If the designated office is closed down (deleted) for some reason, the cycle cannot start until a new procedure is set and agreed upon. A much more flexible implementation is one where every office can initiate a complete optimization cycle independently of the others. In our example this can be done easily. The combining of subsystems can start at any office and propagate "around the ring," until it finally reaches back to the originating office. At that point the cycle is complete. Thus, whenever an office needs to reoptimize it can do so without requiring the cooperation of a coordinator. In fact, each office may use a different sequence for doing the subsystem combination. The details of a protocol that accomplishes a similar type of decentralized algorithm can be found in [15], where the problem is one of routing messages in a computer network.

The latter implementation is preferable since it gives all offices an equal role in the resource allocation process and makes it easier to handle network changes. Some of the features of this type of procedure are discussed next.

C. Flexibility

The decentralized structure of the algorithm leads to great flexibility. A particularly attractive feature of the proposed implementation is its easy growth potential. If new offices or warehouses are added to the network, the optimization procedure is easily modified. Only the offices served by the new warehouse need to be aware of its existence. The addition of a new office will be reflected only by its addition to the sequence which specifies the order in which subsystems are composed. In fact, only two offices need to be aware of the addition of a new office: the one preceding it in the sequence and the one succeeding it. Thus, additions and changes in the network are handled very easily, and without requiring a central coordinator. Similar comments apply, of course, to the deletion of offices and warehouses from the network, or to other structural changes such as changing the list of offices served by a given warehouse. Note also that the results of subsystem optimization (i.e., the $R_i$ tables) do not need to be recomputed in the case of a structural change. For example, if warehouse number $j$ is deleted (closed down for maintenance, say), the offices served by it will simply set the interaction variable $I_j$ related to that warehouse to zero. In other words, they delete all the entries of the $R_i$ table except those which have $I_j = 0$. Offices which were not served by that warehouse are unaffected. If an office gets deleted, the algorithm will simply skip it in the process of combining the subsystems. This makes it possible to handle network changes that happen during the optimization process, without necessarily repeating any of the computations.
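The table-pruning step for a closed warehouse can be sketched directly; the convention of keying a return table by tuples of interaction-variable values is our assumption for illustration:

```python
# Sketch (keying convention is our assumption) of pruning a return table
# when warehouse j closes: keep only the entries whose j-th interaction
# variable is zero, as described above.  No recomputation is needed.
def drop_warehouse(table, j):
    """Keep only the entries of `table` with I_j = 0."""
    return {iv: ret for iv, ret in table.items() if iv[j] == 0}

# Table keyed by tuples of interaction-variable values (I_0, I_1).
table = {(0, 0): 5.0, (1, 0): 7.0, (0, 2): 6.0, (1, 2): 9.0}
pruned = drop_warehouse(table, 1)   # warehouse 1 closes down
```

Offices not served by the closed warehouse have no $I_1$ entries and are left untouched, matching the locality claim in the text.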

The ease with which structural changes can be handled by the proposed strategy is very important in situations where the optimization has to be repeated often, and where the network keeps changing. In the particular example we presented in this correspondence this flexibility may not be that important. However, in some other applications such as electric power networks, airline scheduling, and dispatch systems, the ability to handle changes quickly and efficiently is crucial.

Another aspect of this flexibility is that it increases overall system reliability. If parts of the system (e.g., some of the communication links) fail, that does not cause a "catastrophic" system failure. The parts of the system that are functioning can continue to perform the optimization while ignoring the parts that have some problems, as discussed earlier (by setting $I_j = 0$, or skipping inaccessible offices).

Alternative decomposition techniques based on model coordination share some of the flexibility provided by this approach. However, the need for a central coordinator is somewhat restrictive. In military applications, for example, it is undesirable to have a single coordinator whose failure (destruction) can disrupt the whole system. Furthermore, the need for communication between this coordinator and all subsystems imposes a particular structure on the communication network which is undesirable from a reliability standpoint. In our approach, the required communication network is much more evenly distributed and each subsystem needs to communicate only with two of its neighbors. In some socioeconomic systems there may be political reasons which make it preferable not to single out one member of the system and appoint him as a coordinator.

V. CONCLUSIONS

The algorithm described in previous sections was presented in the context of a simple example. The same approach can be applied in more complex situations and in a variety of applications where a finite amount of resources has to be allocated in an optimal way. One example of possible extensions of the approach is the case of dynamic allocations.

The optimization problem formulated in Section II was basically static. All the available resources were allocated in one computation and no time element was involved. This corresponds, for example, to a situation where the sales offices collect orders during the day, the warehouses are restocked each night, and at the beginning of each day the allocation is performed based on the total orders collected the previous day. Thus, it needs to be performed only once per day.

A more difficult problem arises when the allocation needs to be done dynamically as the available amount of resources and the demand change. For example, each time a new order comes in, we may want to look at the total number of product units available and make a new allocation. Of course, we must somehow take into account the fact that more orders will be forthcoming, and therefore, that part of the stock should be reserved for future use. Under certain assumptions it can be shown that a straightforward extension of the proposed approach is possible in this simple dynamic situation. In [16] we discuss such an extension, assuming knowledge of the statistics of future demands, based on the demand so far (i.e., the conditional probability distribution of demands). Such knowledge may be obtained by running the static allocation algorithm during a "training" period, and analyzing the resulting data.

Another extension of the proposed approach, which is still under investigation, is to the stochastic case. In many situations there are various uncertainties in the problem: either the interaction variables or the amount of available resources may be known only approximately (e.g., the amount of reserves in a given oil or gas field). The optimal solution in this case involves propagating the distribution functions of the variables themselves, which makes the optimal solution very complex. However, various suboptimal solutions seem to be possible.

Finally, we should note that many details of the proposed algorithms were left unspecified. We attempted, however, to present the basic ideas and illustrate how they can be applied to derive a truly decentralized strategy, which does not require a coordinator.

ACKNOWLEDGMENT

The author gratefully acknowledges the suggestions of the anonymous reviewers which helped to clarify several of the issues presented in the manuscript.

REFERENCES


I. G. Lee, W. G. Vogt, and M. H. Mickle, "Optimal decomposition of large-scale networks," IEEE Trans. Syst., Man, Cybern., vol. SMC-9, pp. 369-375, July 1979.

R. Bellman, R. Kalaba, and M. C. Prestrud, Invariant Imbedding and Radiative Transfer in Slabs of Finite Thickness. New York: Elsevier, 1963.

R. Bellman, Adaptive Control Processes: A Guided Tour. Princeton, NJ: Princeton Univ. Press, 1961.

R. E. Larson, State Increment Dynamic Programming. New York: Elsevier, 1968.

P. M. Merlin and A. Segall, "A failsafe distributed routing protocol," to be published.

Systems Control, Inc., Decentralized Contr., Midterm Progress Rep., Contract DA5660-78-C-0069, Nov. 15, 1978.

A. M. Geoffrion, "Primal resource-directive approaches for optimizing nonlinear decomposable systems," Oper. Res., May-June 1970.

A. Findeisen, "Parametric optimization by primal methods in multilevel systems," IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, 1968.

C. Brosilow, L. S. Lasdon, and D. Pearson, "Feasible optimization methods for interconnected systems," in Proc. JACC, Troy, NY, 1965.

J. D. Schoeffler, "Static multilevel systems," in Optimization Methods for Large-Scale Systems with Applications, D. A. Wismer, Ed. New York: McGraw-Hill, 1971.

L. S. Lasdon, Optimization Theory for Large Systems. New York: Macmillan, 1970.

T. Groves and M. Loeb, "Incentives and public inputs," J. Public Econ., vol. 4, pp. 211-226, Aug. 1975.

Y. C. Ho, L. Servi, and R. Suri, "A class of center-free resource allocation algorithms," Large Scale Syst., vol. 1, pp. 51-62, Feb. 1980.

R. G. Gallager, "A minimum delay routing algorithm using distributed computation," IEEE Trans. Commun., vol. COM-25, pp. 73-85, Jan. 1977.

Decentralized Stabilization of Nonlinear Interconnected Systems Using High-Gain Feedback

HASSAN KHALIL AND ALI SABERI

Abstract-The use of local high-gain feedback to stabilize a class of nonlinear interconnected systems is studied. Classes of systems that can be stabilized, for any arbitrary interconnections, using only local high-gain feedback are identified.

INTRODUCTION

Decentralized stabilization of interconnected systems has received considerable attention in the control literature. Most of the work treated the case of linear subsystems. One of the few attempts to consider nonlinear subsystems was carried out by Davison [1]. In that paper Davison considered a class of nonlinear interconnected systems and showed the existence of local high-gain feedback control that would stabilize the system for all possible interconnections. Davison's example suggests that there is a role to be played by high-gain feedback in understanding the decentralized stabilization problem. The early work [2], [3] as well as the recent work [4], [6] on high-gain feedback control suggests that the investigation of large scale system structures which permit local high-gain feedback control will deepen the intuition of control engineers.

In this paper we consider a class of nonlinear interconnected systems that includes Davison's example as a special case. We study the effect of using local high-gain feedback on the stability of the system. It is shown that when local high-gain feedback is employed, one may consider a reduced-order interconnected system whose decentralized stabilizability would imply that the original system can be stabilized using local high-gain feedback. Our result is proved in a global sense, when the equilibrium point is exponentially stable in the large, and in a local sense, when the region of attraction is finite. The result is then used to identify classes of interconnected systems that can be stabilized using decentralized control for any arbitrary interconnections.

GLOBAL RESULT

Consider the nonlinear interconnected system

$$\dot{x}_i = f_i(\tau, x_i) + \sum_{\substack{j=1 \\ j \neq i}}^{N} g_{ij}(\tau, x_j) + B_i u_i, \qquad i = 1, \dots, N \tag{1}$$

Manuscript received September 2, 1980; revised June 17, 1981. This work was supported by the National Science Foundation under Grant ENG-7912152.

The authors are with the Department of Electrical Engineering and Systems Science, Michigan State University, East Lansing, MI 48824.

where $x_i \in R^{n_i}$, $u_i \in R^{m_i}$, $B_i$ is a constant matrix of full rank, $f_i$ and $g_{ij}$ are continuously differentiable, $f_i(\tau, 0) = 0$ and $g_{ij}(\tau, 0) = 0$ for all $\tau \ge 0$, and satisfy

$$\sup_{\tau \ge 0} \; \sup_{x_i \in R^{n_i}} \|\nabla_{x_i} f_i(\tau, x_i)\| = k_1 < \infty,$$

$$\sup_{\tau \ge 0} \; \sup_{x_j \in R^{n_j}} \|\nabla_{x_j} g_{ij}(\tau, x_j)\| = k_2 < \infty. \tag{2}$$

The stabilizability via decentralized control is defined in the spirit of the stabilizability definition of [7].

Definition 1: The system (1) is said to be globally exponentially stabilizable via decentralized feedback control if there are functions $h_i: R_+ \times R^{n_i} \to R^{m_i}$ such that

i) $h_i$ is continuously differentiable, $h_i(\tau, 0) = 0$,

$$\sup_{\tau \ge 0} \; \sup_{x_i \in R^{n_i}} \|\nabla_{x_i} h_i(\tau, x_i)\| = k_h < \infty, \tag{3}$$

and

ii) $x = 0$ is the unique equilibrium point of

$$\dot{x}_i = f_i(\tau, x_i) + \sum_{j \neq i} g_{ij}(\tau, x_j) + B_i h_i(\tau, x_i) \tag{4}$$

and is globally exponentially stable.

To prepare system (1) for high-gain feedback we use the state transformation

$$\begin{bmatrix} y_i \\ z_i \end{bmatrix} = T_i x_i \tag{5}$$

where $T_i$ is a constant nonsingular matrix obtained by row transformations on $B_i$ such that

$$T_i B_i = \begin{bmatrix} 0 \\ G_i \end{bmatrix} \tag{6}$$

and $G_i$ is a nonsingular $m_i \times m_i$ matrix. With (5), system (1) is transformed into

$$\dot{y}_i = \phi_i(\tau, y_i, z_i) + \sum_{j \neq i} \psi_{ij}(\tau, y_j, z_j) \tag{7}$$

$$\dot{z}_i = \Gamma_i(\tau, y_i, z_i) + \sum_{j \neq i} \theta_{ij}(\tau, y_j, z_j) + G_i u_i \tag{8}$$

where $\phi_i$, $\Gamma_i$, $\psi_{ij}$, and $\theta_{ij}$ are obtained from $f_i$ and $g_{ij}$ in an obvious way. Moreover, $\phi_i$ is continuously differentiable, $\phi_i(\tau, 0, 0) = 0$ for all $\tau \ge 0$, and

$$\sup_{\tau \ge 0} \; \sup_{y_i \in R^{r_i},\, z_i \in R^{m_i}} \|\nabla_{y_i} \phi_i(\tau, y_i, z_i)\| = k_3 < \infty,$$

$$\sup_{\tau \ge 0} \; \sup_{y_i \in R^{r_i},\, z_i \in R^{m_i}} \|\nabla_{z_i} \phi_i(\tau, y_i, z_i)\| = k_4 < \infty \tag{9}$$

where $r_i = n_i - m_i$. The functions $\Gamma_i$, $\psi_{ij}$, and $\theta_{ij}$ have properties similar to those of $\phi_i$. It can be easily verified that system (1) is stabilizable via decentralized control if and only if system (7), (8) has the same property. From now on, without loss of generality, we study the stabilizability of (7), (8). Let us define an auxiliary "reduced-order" system as

$$\dot{y}_i = \phi_i(\tau, y_i, v_i) + \sum_{j \neq i} \psi_{ij}(\tau, y_j, v_j). \tag{10}$$

This system is obtained from (7) by treating $z_i$ as an input vector $v_i$. The auxiliary system is called "reduced-order" because its dimension is

$$r = \sum_{i=1}^{N} (n_i - m_i)$$

which is a reduction of the original

0018-9286/82/0200-0265$00.75 © 1982 IEEE