

Automatic Generation of Parallel Compiler - Partial Evaluation of Parallel Lambda Language

Yongqiang Sun    Kai Lin    Yijia Chen
Department of Computer Science

Abstract

We describe in this paper a parallel programming language and its partial evaluation. The parallel language we present is a combination of lambda calculus and message passing, providing [...]. We show how to treat both lambda calculus and message passing in our partial evaluator for the parallel language.

1 Introduction

Partial evaluation is a program transforming technique that specializes programs. Given a source program and its incomplete input data, partial evaluation produces a residual program. When applied to the remaining input data, the residual program yields the same result that the source program would when given all its input data. During the 1970s it was realized, independently by Futamura [3], Turchin [9] and Ershov [2], that a self-applicable partial evaluator can possibly generate a compiler for a language, when given as input the operational definition of the language in the form of an interpreter.

Many partial evaluators have been implemented; examples are Mix [6], Lambda-Mix [5], etc. Usually there are two main phases of partial evaluation: binding time analysis and specialization. Binding time analysis (BTA) works in advance to annotate in the source program those parts that can be evaluated at partial evaluation time. Then the specializer blindly obeys the annotations to compute the computable parts and generate the residual program.
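For a concrete, textbook illustration of specialization (ours, not taken from the paper), consider a power function whose exponent is the part of the input known at partial evaluation time; written in Haskell:

    -- Source program: power x n computes x^n.
    power :: Int -> Int -> Int
    power _ 0 = 1
    power x n = x * power x (n - 1)

    -- Given the static input n = 3, a partial evaluator can unfold the
    -- recursion and emit a residual program equivalent to:
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))

Applying power3 to the remaining input x yields the same result as power x 3.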

To our knowledge, all the existing partial evaluators were designed for sequential languages, and particularly for languages based on lambda calculus. Lambda calculus is very simple, yet of strong expressive power, which makes it both easy to partially evaluate and powerful enough to describe the denotational semantics of other, complex sequential languages. But to treat parallel languages we must also handle the message passing communication mechanism. In standard lambda calculus the evaluation order is arbitrary, and no ambiguity will arise, due to the well-known Church-Rosser property. This is very important for partial evaluation, for we can choose any reduction strategy to perform as much computation as possible. But in message passing the evaluation order is always fixed, and any change of it will possibly lead to a change of the semantics. Meanwhile, there exists indeterminacy in message passing parallelism, so if we want to keep the semantics strictly unchanged, some computations should be forbidden at partial evaluation time, even though all of the needed external information is provided. So to develop the partial evaluator for our parallel language, we should improve upon the techniques originally used for partial evaluation of sequential languages, and introduce some new methods to solve the problems caused by the semantic difference between sequential and parallel computations.

The reader is supposed to be familiar with the basic ideas of partial evaluation; we refer to [4], [1]. The paper is organized as follows: Section 2 describes the parallel language we propose, both its syntax and its operational semantics. To represent the result of the BTA, the concept of a two-level meta-language is applied to our language; its syntax and semantics are also discussed.


In Section 3 we explain the channel analysis, which determines the relation between the channel operations of a program. Section 4 discusses the BTA of our language. Finally, Section 5 summarizes what we have achieved and points out some open problems.

2 Parallel Lambda Language

2.1 The Parallel Lambda Language

The abstract syntax of the parallel lambda language program is given below:

    prog   ::= fundef .  |  fundef , prog
    fundef ::= P(var*) = exp
    exp    ::= const c | var | λ var exp | exp @ exp
             | if exp exp exp | let var exp exp
             | P(exp*) | O(exp*)
             | getch | ?var | letch var!exp exp | var!exp || exp

There are two kinds of expressions: sequential expressions and parallel expressions. The sequential expressions are indeed an enriched lambda calculus, with if-then-else (if), local expression definition (let), built-in operations O(E*) and user-defined function calls P(E*). The parallel expressions are of the following types:
(i) getch - channel creation, which creates a new channel to constitute the interface between processes.
(ii) ?x - receive operation, which receives a value from the channel x.
(iii) letch x!E1 E2 - sequential channel operation. Only when a send operation on channel x with the value of E1 has taken place can E2 be evaluated.
(iv) x!E1||E2 - parallel operation. A send operation on channel x with the value of E1 and the evaluation of E2 can take place simultaneously.
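For concreteness, the abstract syntax above can be rendered as an algebraic data type. The following Haskell sketch is our own illustration; the type and constructor names (Exp, ParSend, etc.) are not from the paper.

    type Var  = String
    type Chan = String

    data Exp
      = Const Int             -- const c
      | VarE Var              -- var
      | ChanV Chan            -- a channel value k (arises only during evaluation)
      | Lam Var Exp           -- lambda var exp
      | App Exp Exp           -- exp @ exp
      | If Exp Exp Exp        -- if exp exp exp
      | Let Var Exp Exp       -- let var exp exp
      | Call String [Exp]     -- P(exp*), user-defined function call
      | Prim String [Exp]     -- O(exp*), built-in operation
      | GetCh                 -- getch, channel creation
      | Recv Var              -- ?var, receive on the channel bound to var
      | LetCh Var Exp Exp     -- letch var!exp exp, sequential send then continue
      | ParSend Var Exp Exp   -- var!exp || exp, send and continue in parallel
      deriving Show

    -- A program is a list of function definitions P(var*) = exp.
    data FunDef = FunDef String [Var] Exp deriving Show
    type Prog   = [FunDef]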

Next we give the operational semantics of the parallel lambda language. There are two kinds of computations in the parallel lambda language: one is that of the lambda calculus and the other is that of message passing communication. We know that the lambda calculus regards computation as β-reduction. Though we can find some explanations of β-reduction by means of communication [8], in our point of view no existing explanation has yet captured the full essence of β-reduction. But to give a consistent and concise description of the semantics, we treat β-reduction the same as inter-process communication, which is notated by τ as in CCS [7].

Definition 2.1 Syntactic Value

    Val = C ∪ T ∪ { λ x exp }

C is the constant set, and T is the channel set, with C ∩ T = ∅.

Definition 2.2 Communication Label

    Comm = { k(v), k̄(v) | v ∈ C, k ∈ T }

k(v) stands for a receive operation inputting value v on channel k, while k̄(v) denotes the send operation outputting value v on k. k(v) and k̄(v) are called complementary labels (we notate the complementary label of l by l̄). Note that we only permit first order non-channel values to be sent on a channel.

Definition 2.3 Label

    L = Comm ∪ { τ }

Definition 2.4 Expression Configuration

    Econ = { (K, E) | K ⊆ T, E ∈ exp }

Thus the labelled transition system defining the semantics of our language is a triple (Econ, L, →), where → ⊆ Econ × L × Econ. Below we write -l-> for a transition carrying label l and -τ-> for a τ-labelled transition.

Application (we adopt the lazy call-by-value strategy):

    K, E1 -l-> K', E1'
    ---------------------------------
    K, E1 @ E2 -l-> K', E1' @ E2

    K, E2 -l-> K', E2'
    ---------------------------------
    K, v @ E2 -l-> K', v @ E2'

    K, (λ x E) @ v -τ-> K, E[v/x]

If

    K, E1 -l-> K', E1'
    -----------------------------------------
    K, if E1 E2 E3 -l-> K', if E1' E2 E3

    K, if 1 E1 E2 -τ-> K, E1

    K, if 0 E1 E2 -τ-> K, E2

Let

    K, E1 -l-> K', E1'
    -------------------------------------------
    K, let x E1 E2 -l-> K', let x E1' E2

    K, let x v E -τ-> K, E[v/x]

Function Call

    K, Ei -l-> K', Ei'
    ----------------------------------------------------
    K, P(v̄ Ei ... En) -l-> K', P(v̄ Ei' ... En)

where |v̄| = i - 1.

    K, P(v̄) -τ-> K, E[v̄/x̄]

where |v̄| = |x̄| and P(x̄) ≜ E.

Channel Creation

    K, getch -τ-> K ∪ {k}, k


where k ∉ K.

Receive

    K, ?k -k(c)-> K, c

Sequential Channel Operation

    K, E1 -l-> K', E1'
    --------------------------------------------------
    K, letch k!E1 E2 -l-> K', letch k!E1' E2

    K, letch k!c E -k̄(c)-> K, E

Parallel Channel Operation

    K, E1 -l-> K', E1'
    ---------------------------------------------
    K, k!E1 || E2 -l-> K', k!E1' || E2

    K, k!c || E -k̄(c)-> K, E

    K, E1 -l-> K, E1'    K, E2 -l̄-> K, E2'
    ------------------------------------------
    K, k!E1 || E2 -τ-> K, k!E1' || E2'

    K, E -k(c)-> K, E'
    -----------------------------
    K, k!c || E -τ-> K, E'

In the above rules, c ∈ C, k ∈ T, and v, v' ∈ Val. Besides the transition rules, we have the extra restriction that channels must not be passed between functions. It follows that communication can only take place inside one function.
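To make the transition rules concrete, the following Haskell sketch (ours, reusing the Exp type from the earlier sketch) gives a non-deterministic step function for a few of the rules only: β-reduction, the two application congruences, and channel creation. Substitution and the communication rules are left abstract, and the lazy call-by-value strategy is only approximated.

    data Label = Tau | RecvL Chan Int | SendL Chan Int deriving Show

    type Config = ([Chan], Exp)      -- (K, E): allocated channels and the expression

    isValue :: Exp -> Bool
    isValue (Const _) = True
    isValue (Lam _ _) = True
    isValue (ChanV _) = True
    isValue _         = False

    -- Capture-avoiding substitution E[v/x]; omitted in this sketch.
    subst :: Var -> Exp -> Exp -> Exp
    subst = error "substitution omitted"

    -- All labelled steps that a configuration can make under these rules.
    step :: Config -> [(Label, Config)]
    step (k, App (Lam x e) v)
      | isValue v = [(Tau, (k, subst x v e))]                 -- beta, labelled tau
    step (k, App e1 e2)
      | not (isValue e1) = [ (l, (k', App e1' e2)) | (l, (k', e1')) <- step (k, e1) ]
      | otherwise        = [ (l, (k', App e1 e2')) | (l, (k', e2')) <- step (k, e2) ]
    step (k, GetCh) = [(Tau, (fresh : k, ChanV fresh))]       -- channel creation
      where fresh = "ch" ++ show (length k)                   -- naive fresh-name choice
    step _ = []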

2.2 The Two-level Parallel Lambda Language

The result of applying the BTA to an expression is an annotated expression where the parts that are not to be evaluated at partial evaluation time are marked. This naturally leads to the notion of the two-level parallel lambda language. In the two-level language, we distinguish the partial-evaluation-time expressions and the run-time expressions by introducing two sorts of operators: partial-evaluation-time operators and run-time operators, i.e. two versions of each operator of the ordinary parallel lambda language. The abstract syntax of the two-level parallel lambda language is given below:

    tprog   ::= tfundef .  |  tfundef , tprog
    tfundef ::= P(var*) = texp
    texp    ::= const c | constr c | var
              | λ var texp | λr var texp
              | texp @ texp | texp @r texp
              | if texp texp texp | ifr texp texp texp
              | let var texp texp | letr var texp texp
              | P(texp*) | P-r(texp*) | O(texp*) | O-r(texp*)
              | getch | getchr | ?var | ?r var
              | letch var!texp texp | letchr var!r texp texp
              | var!texp || texp | var!r texp ||-r texp
              | lift texp

The meaning of the 'normal' operators is the same as in the ordinary parallel lambda language, while the residual operators represent the run-time computations and yield a piece of code as result. The lift operator is introduced to lift the value of a partial-evaluation-time computable expression into a const expression.
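One way (ours, not the paper's) to realize the two-level syntax in an implementation is to tag each operator occurrence with its binding time instead of duplicating every constructor; a Haskell sketch, reusing Var from the earlier sketch:

    -- S: evaluate at partial evaluation time; D: residual (run time).
    data BT = S | D deriving (Eq, Show)

    data TExp
      = TConst BT Int
      | TVar Var
      | TLam BT Var TExp
      | TApp BT TExp TExp
      | TIf BT TExp TExp TExp
      | TLet BT Var TExp TExp
      | TCall BT String [TExp]
      | TPrim BT String [TExp]
      | TGetCh BT
      | TRecv BT Var
      | TLetCh BT Var TExp TExp
      | TParSend BT Var TExp TExp
      | TLift TExp                  -- lift texp
      deriving Show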

The operational semantics of the two-level parallel lambda language is defined through the following:

Definition 2.5 Two-level Syntactic Value

    Tval = C ∪ T ∪ { λ x texp } ∪ { exp }

Definition 2.6 Two-level Communication Label

    Tcomm = { k(c), k̄(c) | c ∈ C, k ∈ T }

Definition 2.7 Two-level Label

    Tl = Tcomm ∪ { τ }

Definition 2.8 Two-level Expression Configuration

    Tecon = { (K, E) | K ⊆ T, E ∈ texp }

Then a certain (Tecon, Tl, →) gives the semantics, where → ⊆ Tecon × Tl × Tecon. Since the semantics of the non-residual expressions is the same as that of the one-level expressions, their transition rules are omitted. For the residual expressions we only give some examples; the rest are similar.

Residual Constant

    K, constr c -τ-> K, build_const(c)

Residual Lambda

    K, λr x v -τ-> K, build_λ(nvar, v[nvar/x])

where nvar = newvar().

Residual Application

    K, E1 -l-> K', E1'
    ------------------------------------------
    K, E1 @r E2 -l-> K', E1' @r E2

    K, E -l-> K', E'
    ------------------------------------------
    K, v @r E -l-> K', v @r E'

    K, v1 @r v2 -τ-> K, build_@(v1, v2)


Residual Parallel Channel Operation

    K, E1 -l-> K', E1'
    ------------------------------------------------------
    K, ch!r E1 ||-r E2 -l-> K', ch!r E1' ||-r E2

where ch is k or x.

    K, E2 -l-> K', E2'
    ------------------------------------------------------
    K, ch!r v1 ||-r E2 -l-> K', ch!r v1 ||-r E2'

where ch is k or x.

    K, k!r v1 ||-r v2 -τ-> K, build_||(chvar(k), v1, v2)

Lift

    K, E -l-> K', E'
    ------------------------------------
    K, lift E -l-> K', lift E'

    K, lift c -τ-> K, build_const(c)

    K, lift k -τ-> K, chvar(k)

    K, lift v -τ-> K, v

In the above rules, c ∈ C, k ∈ T, v, v1, v2 ∈ Tval and x is some var expression. Note that newvar() is a function with a side-effect, which produces new variable names continuously during the evaluation to avoid name clashes. Meanwhile chvar is a one-to-one mapping from T to a set of variable names which is disjoint from the range of newvar(). In our language we do not have global channels, so we cannot lift a dynamically created channel into a const expression; that is why we need the chvar mapping. However, such lifting causes a small problem: the result of the evaluation will contain some undefined variables. To fix this, we need a post-process that adds subexpressions of the form let v getch ... to the result expression at the proper positions. For every residual operator opr we have a build_opr function to produce a piece of code in which the non-residual opr appears in the residual program.
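The code-building side of these rules can be realized straightforwardly; the sketch below is ours (reusing the one-level Exp type from the earlier sketch), with buildConst, buildLam, buildApp and buildPar standing in for build_const, build_λ, build_@ and build_||, and an IORef counter standing in for newvar().

    import Data.IORef

    buildConst :: Int -> Exp              -- build_const
    buildConst = Const

    buildLam :: Var -> Exp -> Exp         -- build_λ
    buildLam = Lam

    buildApp :: Exp -> Exp -> Exp         -- build_@
    buildApp = App

    buildPar :: Var -> Exp -> Exp -> Exp  -- build_||, residual x!E1 || E2
    buildPar = ParSend

    -- chvar: a one-to-one map from channels to variable names, chosen so
    -- that it cannot clash with the names produced by newvar below.
    chvar :: Chan -> Var
    chvar k = "ch_" ++ k

    -- newvar(): a side-effecting fresh-name supply.
    newVarSupply :: IO (IO Var)
    newVarSupply = do
      counter <- newIORef (0 :: Int)
      return $ do
        n <- readIORef counter
        writeIORef counter (n + 1)
        return ("v" ++ show n)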

3 Channel Analysis

The binding time of a channel operation does not only depend on the operation itself, but also on the corresponding channel operation. For example, in the expression x!const 3 || ?y, if x and y can be bound to the same channel, but are not both known at partial evaluation time, then we should suspend this evaluation until run time. So we should know the possible relation between every send and receive operation, and this is the purpose of the channel analysis. To start the channel analysis, we must first know which channels can be used in every channel operation. This leads to the channel variable analysis, which in turn depends on a closure analysis [1], since our language is higher order. So the whole channel analysis can be divided into three phases: closure analysis, channel variable analysis and channel analysis.

The purpose of the closure analysis is, for every variable and every expression, to collect the set of possible lambda abstractions that the variable may be bound to or the expression may evaluate to. It yields the following mappings:

    μcl : ExpLabel → P(ExpLabel)
    μcl : VarLabel → P(ExpLabel)

An abstract closure is identified by the label of its body expression. Note that we distinguish every expression and every variable by labels (in the sequel we will write exp instead of explabel, for example μcl(exp) instead of μcl(explabel), if no confusion arises).

Channel variable analysis is, for every variable and every expression, to collect the set of possible channels that the variable may be bound to or the expression may evaluate to, i.e. the following two mappings:

    μcv : ExpLabel → P(ExpLabel)
    μcv : VarLabel → P(ExpLabel)

Every possible channel is identified by its generating expression, i.e. the corresponding getch expression.

To describe the communication topology of the given program, we introduce a sort of directed acyclic graph (DAG), named Communication Diagram. We then define a reduction relation on communication diagrams to imitate the real execution of the programs.

Definition 3.1 A Communication Diagram is a DAG (G, →), of which the nodes are of the following types:

(i) non-channel-operation
(ii) choice
(iii) parallel
(iv) communication

(iv) communica tion @ @ Definition 3.2 A node of a communication diagram is visible iff t t has no father node.

A communication diagram is to some extent an abstraction of program execution: every node stands for a certain operation, while the directed edges represent the execution order of the operations. When a node is visible, the corresponding operation can take place immediately.
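To make the structure concrete, a communication diagram could be represented as follows; this Haskell sketch is our own illustration (names such as CommNode and commPairs are not from the paper).

    import qualified Data.Map as Map

    data NodeKind
      = NonChannelOp
      | Choice
      | Parallel
      | CommNode Chan Bool        -- a send (True) or receive (False) on a channel
      deriving (Eq, Show)

    type NodeId = Int

    data Diagram = Diagram
      { nodes :: Map.Map NodeId NodeKind
      , edges :: [(NodeId, NodeId)]   -- (father, son): the son must wait for the father
      } deriving Show

    -- A node is visible iff no edge points to it.
    visible :: Diagram -> [NodeId]
    visible d = [ n | n <- Map.keys (nodes d), n `notElem` map snd (edges d) ]

    -- Visible, complementary communication nodes on the same channel: the
    -- communications that can fire next in some execution of the program.
    commPairs :: Diagram -> [(NodeId, NodeId)]
    commPairs d =
      [ (s, r)
      | s <- vis, r <- vis
      , CommNode c  True  <- [nodes d Map.! s]
      , CommNode c' False <- [nodes d Map.! r]
      , c == c' ]
      where vis = visible d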


In other words, at any moment of every possible execution of E, there is no more than one communication subexpression that can communicate with it.

We claim that only the deterministic communication subexpressions can possibly be evaluated during partial evaluation to preserve the indeterminacy of the original expression.

The channel analysis first establishes the communication diagram of the given program; then, by tracing all of its reduction paths, it computes the following mapping from communication expressions to the set of every possible corresponding communication expression, provided that the expression is deterministic. When an expression is discovered not to be deterministic, its value is set to ⊥:

    μcp : ExpLabel → P(ExpLabel) ∪ {⊥}

According to our rules, the communication diagram of any expression is finite, so its possible reduction paths are finite. They always terminate either in an empty graph or in a graph having communication nodes but no communication pairs.
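In an implementation, the results of the three analysis phases can be summarized with types like the following (our own illustrative Haskell, with labels modelled as plain integers):

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    type ExpLabel = Int
    type VarLabel = Int

    -- closure analysis and channel variable analysis: label -> set of labels
    type MuCl = (Map.Map ExpLabel (Set.Set ExpLabel), Map.Map VarLabel (Set.Set ExpLabel))
    type MuCv = (Map.Map ExpLabel (Set.Set ExpLabel), Map.Map VarLabel (Set.Set ExpLabel))

    -- channel analysis: the possible communication partners of a deterministic
    -- communication expression, or Bottom when it is not deterministic.
    data Partners = Partners (Set.Set ExpLabel) | Bottom deriving Show
    type MuCp     = Map.Map ExpLabel Partners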

4 Binding Time Analysis

The aim of binding time analysis is to distinguish which computations can be performed at partial evaluation time and which must be suspended. Given an initial type environment for the parameters, the BTA annotates a program of the parallel lambda language into a two-level one by marking as residual those parts which cannot be computed during partial evaluation time. Our analysis is based on type inference [4]: first we devise a type system for the two-level parallel lambda language. A program which is well typed according to the type system is called well-formed; only well-formed programs are valid. Thus the aim of the BTA is to annotate the program to be well-formed by replacing some occurrences of @, !, ... by the corresponding @r, !r, ... and inserting some lift operators. Furthermore, the BTA should ensure that the partial evaluation can perform as many computations as possible, and will never change the semantics of the program.

Definition 4.1 The abstract syntax of two-level types is given by:

    type      ::= (ctrl, basictype)
    ctrl      ::= sc | dc
    basictype ::= base | code | funtype
    code      ::= hocode | chcode
    funtype   ::= (ctrl, base) → type | (ctrl, hocode) → type | (ctrl, funtype) → type


A type is a pair consisting of a control condition and a basic type. The control condition shows whether an expression is a subexpression of an if expression which will not be resolved until run time, i.e. static control (sc) or dynamic control (dc). In the partial evaluation of a sequential functional language, for the expression if E1 E2 E3, if E1 is unknown at partial evaluation time, both E2 and E3 will be evaluated. When we neglect the problem of termination, this strategy is feasible, because the sequential functional language has no side-effects, which is actually ensured by the Church-Rosser property. But in the parallel lambda language, because the channel operations strongly depend on the evaluation order, this method is no longer valid. Consider the expression ch!const 3 || if a == const 3 ?ch ?ch+const 3: if a is unknown, evaluating both ?ch and ?ch+const 3 will lead to a deadlock at the second ?ch, because ch!const 3 only takes place once. This may suggest introducing some 'rewind' function, that is to say, after the computation of the first branch of the if, we eliminate all the communication involved in its computation, and then go on with the computation of the second branch. Unfortunately this method is not only greatly time and space consuming, but also causes new problems. Consider the expression ch!const 3 || if a == const 3 ?ch const 5: it is actually a partial function; when a is equal to 3 it returns 3, otherwise it falls into deadlock. If we use the 'rewind' method when a is unknown at partial evaluation time, the evaluation will be deadlocked in the second branch of the if expression, and no residual program will be produced. So if the control condition of a communication expression is dynamic (dc), the evaluation should be suspended.

The basic type is used to distinguish the following cases:
(i) base - the constants that can be evaluated in partial evaluation;
(ii) code - the partial-evaluation-time uncomputable expressions;
(iii) funtype - the higher order values that can be evaluated in partial evaluation.
For the basic type code, we can further distinguish between computations with residual channel operations and those without, and denote them by chcode and hocode respectively. An expression with a basic type of t1 → t2 lives at partial evaluation time and must not be applied to an expression with type (c, chcode), otherwise it risks changing the sequence of communication. Thus t1 cannot be a type of the form (c, chcode).
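The two-level types of Definition 4.1 can be encoded directly; the Haskell sketch below is our own rendering, with the grammar's restriction on function arguments recorded only as a comment.

    data Ctrl = SC | DC deriving (Eq, Show)          -- static / dynamic control
    data Code = HoCode | ChCode deriving (Eq, Show)  -- code without / with residual channel operations

    data BasicType
      = Base                 -- partial-evaluation-time constants
      | CodeT Code           -- partial-evaluation-time uncomputable expressions
      | FunType Type Type    -- the argument's basic type must not be ChCode (see text)
      deriving (Eq, Show)

    type Type = (Ctrl, BasicType)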

Definition 4.2 We define the following partial order on the type system: (c1, bt1) ⊑ (c2, bt2) iff c1 ⊑ c2 and bt1 ⊑ bt2.


their types, then E is well-formed if pt ⊢ E : t can be deduced from the inference rules given below for some type t:

Var

Parallel

    pt ⊢ E1 : (sc, base)    pt ⊢ E2 : (sc, bt)
    --------------------------------------------
    pt ⊢ E1 || E2 : (sc, bt)

Residual Constant

    pt ⊢ constr c : (c, hocode)

Residual Lambda

    pt[x ↦ (dc, btx)] ⊢ E : (dc, bt2)
    -----------------------------------
    pt ⊢ λr x E : (c, bt1)

where bt1 ⊒ bt2 ⊒ hocode.

Residual Application

where bt1 ⊒ bt2 ⊒ hocode.

Residual If

    pt ⊢ E1 : (c, bt1)    pt ⊢ E2 : (c, bt2)    pt ⊢ E3 : (c, bt3)
    ----------------------------------------------------------------
    pt ⊢ ifr E1 E2 E3 : (c, bt1 ⊔ bt2 ⊔ bt3)

where bt2 ≠ chcode.

If

where bt1, bt2 = hocode or chcode.

Let

Send

where μcp(ch!E) = { E1, ..., En } and [...]

Receive

    pt(ch) = t    pt ⊢ E1, ..., En : t
    ------------------------------------
    pt ⊢ ?ch : t

Letch

    pt ⊢ E1 : (sc, base)    pt ⊢ E2 : (sc, bt)
    --------------------------------------------
    pt ⊢ letch E1 E2 : (sc, bt)

where bt1, bt2, bt3 ⊒ hocode.

Residual Let

where bt1, bt2 ⊒ hocode.

Residual Send

    pt(ch) = (c, bt1)    pt ⊢ E : (c, bt2)
    ----------------------------------------
    pt ⊢ ch!r E : (c, chcode)

where bt1 = base or hocode and bt2 ⊒ hocode.

Residual Receive

    pt(ch) = (c, bt)
    ------------------------------
    pt ⊢ ?r ch : (c, chcode)

where bt = base or hocode.

Residual Channel Creation

    pt ⊢ getchr : (c, hocode)

Residual Letch

    pt ⊢ E1 : (c, chcode)    pt ⊢ E2 : (c, bt)
    --------------------------------------------
    pt ⊢ letchr E1 E2 : (c, chcode)

where bt ⊒ hocode.

Residual Parallel

    pt ⊢ E1 : (c, chcode)    pt ⊢ E2 : (c, bt)
    --------------------------------------------
    pt ⊢ E1 ||-r E2 : (c, chcode)

where bt ⊒ hocode.

Lift

    pt ⊢ E : (c, base)
    ------------------------------
    pt ⊢ lift E : (c, hocode)


    pt ⊢ E : (c, hocode)
    ------------------------------
    pt ⊢ lift E : (c, hocode)

    pt ⊢ E : (c, chcode)
    ------------------------------
    pt ⊢ lift E : (c, chcode)

From the type inference rules we can see that:
(i) if an expression is applied to an expression of basic type chcode, the apply operation must be forbidden at partial evaluation time. Similarly, in let x E1 E2, if E1 has the basic type chcode, the evaluation must be suspended.
(ii) in the residual lambda expression, the control condition of the type of its body expression is forced to be dc, whether it is in some unresolved if expression or not, because we do not have the following rule:

    E → E'
    ----------------------
    λ x E → λ x E'

which is valid in sequential languages. So we set the control condition of the residual lambda expression's body to be dc in order to prevent any evaluation of its channel operations.

5 Conclusion and Open Problems

We have developed a partial evaluator for the parallel lambda language, and to our knowledge it is the first one of this kind. We have improved the BTA to suit the needs of the parallel language, and the technique of channel analysis has been introduced to analyze the communication topology of programs. The specializer of our partial evaluator is currently written in C. Implementation of a self-applicable one, i.e. one written in the parallel lambda language itself, is under way.

Several problems remain open. First, we have the restriction that communication can only take place inside one function, which severely limits the expressive power of our language. But if we try to get rid of this constraint, some problems occur: because communication can happen between functions, we have to introduce a function-call node in the communication diagram, which can make the graph reduction non-terminating. Meanwhile, because of inter-function communication, functions with the same static parameters may not have the same semantics. Hence, in the specialization, yielding one version of a function per set of static parameters may not be safe. But then how can we avoid non-terminating specialization?

Second, only first order non-channel values can be transferred through channels in our language. If both higher order values and channels are permitted to be sent on channels, then it will involve some iteration among the closure analysis, the channel variable analysis and the channel analysis.

Acknowledgements

This work is supported in part by the 863 Project (863-306).

The authors would like to thank Hairong Kuang and Zhe Yang, who devoted a lot of effort to this research at an earlier stage and explored many new ideas which have proved to be very important. Special thanks also go to Dr. Yuxi Fu for reminding the authors of the essential theoretical aspects of the work.

References

[1] A. Bondorf, 1990. Automatic autoprojection of higher order recursive equations. Science of Computer Programming, 17(1-3), 3-34. Revision of paper in ESOP '90, LNCS 432, May 1990.

[2] A. P. Ershov, 1978. On the essence of compilation. In E. J. Neuhold (ed.), Formal Description of Programming Concepts, 391-420, 1978, North-Holland.

[3] Y. Futamura, 1971. Partial Evaluation of Computation Process - An Approach to a Compiler-compiler. Systems, Computers, Controls, 2(5), 45-50, 1971.

[4] C. K. Gomard and N. D. Jones, 1991. A Partial Evaluator for the Untyped Lambda Calculus. Journal of Functional Programming, 1(1), 21-69, January 1991.

[5] N. D. Jones, C. K. Gomard and A. Bondorf, 1990. A self-applicable partial evaluator for the lambda calculus. IEEE Computer Society, 1990 International Conference on Computer Languages, 1990.

[6] N. D. Jones, P. Sestoft and H. Søndergaard, 1985. An Experiment in Partial Evaluation: the Generation of a Compiler Generator. In J.-P. Jouannaud (ed.), Rewriting Techniques and Applications, Dijon, France, 1985, LNCS 202, 124-140, Springer-Verlag.

[7] R. Milner, 1989. Communication and Concurrency. Series in Computer Science, Prentice-Hall.

[8] R. Milner, 1992. Functions as Processes. Journal of Mathematical Structures in Computer Science, 2, 119-141, Cambridge University Press.

[9] V. F. Turchin, 1980. The Use of Metasystem Transition in Theorem Proving and Program Optimizing. In J. de Bakker and J. van Leeuwen (eds.), Automata, Languages and Programming, 7th ICALP, Noordwijkerhout, the Netherlands, 1980, LNCS 85, 645-657, Springer-Verlag.