
Page 1: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

1

Languages and Compilers(SProg og Oversættere)

Lecture 14

Concurrency and distribution

Bent Thomsen

Department of Computer Science

Aalborg University

With acknowledgement to John Mitchell whose slides this lecture is based on.

Page 2: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

2

Concurrency, distributed computing, the Internet

• Traditional view:
  – Let the OS deal with this
  – => It is not a programming language issue!
  – End of lecture

• Wait a minute …
  – Maybe "the traditional view" is getting out of date?

Page 3: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

3

Languages with concurrency constructs

Maybe the "traditional view" was always out of date?
• Simula
• Modula3
• Occam
• Concurrent Pascal
• ADA
• Linda
• CML
• Facile
• Jo-Caml
• Java
• C#
• Fortress
• …

Page 4: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

4

Categories of Concurrency:

1. Physical concurrency – multiple independent processors
   • Uni-processor with I/O channels (multi-programming)
   • Multiple CPUs (parallel programming)
   • Network of uni- or multi-CPU machines (distributed programming)

2. Logical concurrency - The appearance of physical concurrency is presented by time-sharing one processor (software can be designed as if there were multiple threads of control)

• Concurrency as a programming abstraction

Def: A thread of control in a program is the sequence of program points reached as control flows through the program

Page 5: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

5

Introduction

Reasons to Study Concurrency
1. It involves a different way of designing software that can be very useful; many real-world situations involve concurrency
   – Control programs
   – Simulations
   – Client/servers
   – Mobile computing
   – Games
2. Computers capable of physical concurrency are now widely used
   – High-end servers
   – Grid computing
   – Game consoles
   – Dual Core CPUs, Quad Core … 32 Core in 3 years

Page 6: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

6

Compilers and More -- What to Do With All Those Cores?
HPC Wire (04/06/07) Vol. 16, No. 14, Wolfe, Michael

One of the PPoPP attendees, Prof. Rudolf Eigenmann (Purdue Univ.) issued an indictment, saying that we in the parallel programming research community should be ashamed of ourselves. Single-processor systems have run out of steam, something the parallel programming community has been predicting since I was a college student. Now is the time to step up and reap the benefits of all our past work. We've had 30 years to study this problem and come up with a solution, but what's the end result? Surprise! We still have no well-accepted method to generate parallel applications.

Dr. Andrew Chien (Intel), one of the PPoPP keynote speakers, took issue with Eigenmann's criticism. Chien said that in fact we've had a great deal of success in parallel programming: just look at all the massively parallel systems and the applications that run on them. However, halfway through his talk was the slide "Wanted: Breakthrough Innovations in Parallel Programming." I asked how he could claim past success, then state that breakthrough innovations are needed; it sounded like a typical manager: "good job, now get back to work." He replied that in the past, parallel programming meant high performance. Now, parallel programming means spreadsheets, games, email, and applications on your laptop. It's a different target environment, with a different class of programmer, and different expectations.


Page 7: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

7

The promise of concurrency

• Speed
  – If a task takes time t on one processor, shouldn't it take time t/n on n processors?
• Availability
  – If one processor is busy, another may be ready to help
• Distribution
  – Processors in different locations can collaborate to solve a problem or work together
• Humans do it, so why can't computers?
  – Vision and cognition appear to be highly parallel activities

Page 8: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

8

Challenges

• Concurrent programs are harder to get right
  – Folklore: need an order of magnitude speedup (or more) to be worth the effort
• Some problems are inherently sequential
  – Theory: circuit evaluation is P-complete
  – Practice: many problems need coordination and communication among sub-problems
• Specific issues
  – Communication: send or receive information
  – Synchronization: wait for another process to act
  – Atomicity: do not stop in the middle and leave a mess

Page 9: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

9

Why is concurrent programming hard?

• Nondeterminism
  – Deterministic: two executions on the same input will always produce the same output
  – Nondeterministic: two executions on the same input may produce different output
• Why does this cause difficulty?
  – There may be many possible executions of one system
  – Hard to think of all the possibilities
  – Hard to test the program, since some cases may occur infrequently
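To make the nondeterminism concrete, here is a minimal Java sketch (not from the slides; names are illustrative): two threads increment a shared, unsynchronized counter, and different runs of the same program can print different totals.

    public class RaceDemo {
        static int counter = 0;   // shared, unsynchronized state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;    // read-modify-write, not atomic
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Often less than 200000, because increments from the two threads interleave.
            System.out.println(counter);
        }
    }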

Page 10: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

10

Traditional C Library for concurrency

System Calls

- fork( )

- wait( )

- pipe( )

- write( )

- read( )

Examples

Page 11: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

11

Process Creation

fork()
NAME
    fork() – create a new process
SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    pid_t fork(void);
RETURN VALUE
    success: parent – child pid; child – 0
    failure: -1

Page 12: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

12

fork() – program structure

#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;
    if ((pid = fork()) > 0) {
        /* parent */
    } else if (pid == 0) {
        /* child */
    } else {
        /* cannot fork */
    }
    exit(0);
}

Page 13: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

13

Wait() system call

wait() – suspend the calling process until one of its child processes terminates

SYNOPSIS

#include <sys/types.h>

#include <sys/wait.h>

pid_t wait(int *stat_loc);

stat_loc points to an integer in which the terminated child's status is stored

RETURN VALUE

success – pid of the terminated child

failure – -1, and errno is set

Page 14: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

14

wait() – program structure

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    pid_t childPID;
    if ((childPID = fork()) == 0) {
        /* child */
    } else {
        /* parent */
        wait(0);
    }
    exit(0);
}

Page 15: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

15

Pipe() system call

pipe() – create a read-write pipe that may later be used to communicate with a process we'll fork off.

SYNOPSIS

int pipe(int pfd[2]);

PARAMETER
pfd is an array of 2 integers that will be used to save the two file descriptors used to access the pipe

RETURN VALUE:
0 – success
-1 – error

Page 16: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

16

Pipe() - structure

/* first, define an array to store the two file descriptors */
int pipes[2];

/* now, create the pipe */
int rc = pipe(pipes);
if (rc == -1) {
    /* pipe() failed */
    perror("pipe");
    exit(1);
}

If the call to pipe() succeeded, a pipe will be created, pipes[0] will contain the number of its read file descriptor, and pipes[1] will contain the number of its write file descriptor.

Page 17: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

17

Write() system call

write() – used to write data to a file or other object identified by a file descriptor.

SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    ssize_t write(int fildes, const void *buf, size_t nbyte);

PARAMETERS
    fildes is the file descriptor,
    buf is the base address of the area of memory that data is copied from,
    nbyte is the amount of data to copy

RETURN VALUE
    The return value is the actual amount of data written; if this differs from nbyte then something has gone wrong

Page 18: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

18

Read() system call

read() – read data from a file or other object identified by a file descriptor

SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    ssize_t read(int fildes, void *buf, size_t nbyte);

PARAMETERS
    fildes is the file descriptor,
    buf is the base address of the memory area into which the data is read,
    nbyte is the maximum amount of data to read.

RETURN VALUE
    The actual amount of data read from the file. The file offset is advanced by the amount of data read.

Page 19: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

19

Solaris 2 Synchronization

• Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing.

• Uses adaptive mutexes for efficiency when protecting data from short code segments.

• Uses condition variables and readers-writers locks when longer sections of code need access to data.

• Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock.

Page 20: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

20

Windows 2000 Synchronization

• Uses interrupt masks to protect access to global resources on uniprocessor systems.

• Uses spinlocks on multiprocessor systems.
• Also provides dispatcher objects, which may act as either mutexes or semaphores.
• Dispatcher objects may also provide events. An event acts much like a condition variable.

Page 21: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

21

Basic question

• Maybe the library approach is not such a good idea?

• How can programming languages make concurrent and distributed programming easier?

Page 22: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

22

Language support for concurrency

• Help promote good software engineering
• Allow the programmer to express solutions more closely to the problem domain
• No need to juggle several programming models (hardware, OS, library, …)
• Make invariants and intentions more apparent (part of the interface and/or type system)
• Allow the compiler much more freedom to choose different implementations
• Base the programming language constructs on a well-understood formal model => formal reasoning may be less hard and the use of tools may be possible

Page 23: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

23

What could languages provide?

• Abstract model of system
  – abstract machine => abstract system
• Example high-level constructs
  – Communication abstractions
    • Synchronous communication
    • Buffered asynchronous channels that preserve msg order
  – Mutual exclusion, atomicity primitives
    • Most concurrent languages provide some form of locking
    • Atomicity is more complicated, less commonly provided
  – Process as the value of an expression
    • Pass processes to functions
    • Create processes as the result of function calls

Page 24: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

24

Design Issues for Concurrency:

1. How is cooperation synchronization provided?

2. How is competition synchronization provided?

3. How and when do tasks begin and end execution?

4. Are tasks statically or dynamically created?

5. Are there any syntactic constructs in the language?

6. Are concurrency constructs reflected in the type system?

7. How to generate code for concurrency constructs?

8. How is the run-time system affected?

Page 25: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

25

Run-time system for concurrency

• Processes versus Threads

(Diagram: the operating system manages processes and threads; fibres are scheduled by a user-level thread library on top of them.)

Fibres are sometimes called green threads

Page 26: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

26

Multithreading in Java: multithreading models

Many-to-One model: Green threads in Solaris

(Diagram: the Java application and JVM with green threads run in user space; they are mapped onto a single LWP/kernel thread in kernel space, which the kernel schedules on the CPU.)

Page 27: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

27

Multithreading in Java: multithreading models

Many-to-One: Green threads in Solaris

• Multiple user-level threads (ULTs) map to one kernel-level thread (KLT)
• The threads library is shipped with the Java Development Kit (JDK)
• Disadvantages:
  – If one thread blocks, all threads are blocked
  – Cannot run on multiprocessors in parallel

A thread library is a package of code for user-level thread management, i.e. scheduling thread execution, saving thread contexts, etc. In Solaris this threads library is called "green threads".

Page 28: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

28

Multithreading in Java: multithreading models

One-to-One model: in Windows NT

(Diagram: the Java application and JVM run in user space; each user-level thread is mapped onto its own LWP/kernel thread in kernel space, scheduled by the kernel on the CPU.)

Page 29: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

29

Multithreading in Java: multithreading models

One-to-One model: in Windows NT

• One ULT maps to one KLT
• Realized by the Windows NT threads package
• The kernel maintains context information for the process and for each individual thread
• Disadvantage: switching from one thread to another at kernel level takes much longer than at user level

Page 30: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

30

Multithreading in Java: multithreading models

Many-to-Many model: Native threads in Solaris

(Diagram: the Java application's native threads in user space are multiplexed onto several LWPs/kernel threads in kernel space, which the kernel schedules on the CPUs.)

Page 31: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

31

Multithreading in Java: multithreading models

Many-to-Many model: Native threads in Solaris

• Two-level model, i.e. a combined model of ULTs and KLTs
• In the Solaris operating system, the native threads library can be selected by setting THREADS_FLAG in the JDK to the native environment
• A user-level threads library (native threads), provided by the JDK, schedules user-level threads on top of kernel-level threads
• The kernel only needs to manage the threads that are currently active
• Solves the problems of the two models above

Page 32: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

32

Synchronization

• Kinds of synchronization:
1. Cooperation
   – Task A must wait for task B to complete some specific activity before task A can continue its execution, e.g. the producer-consumer problem
2. Competition
   – When two or more tasks must use some resource that cannot be used simultaneously, e.g. a shared counter
   – Competition is usually handled by mutually exclusive access (approaches are discussed later)

Page 33: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

33

Basic issue: conflict between processes

• Critical section
  – Two processes may access shared resource(s)
  – Inconsistent behaviour if the two actions are interleaved
  – Allow only one process in the critical section at a time
• Deadlock
  – A process may hold some locks while awaiting others
  – Deadlock occurs when no process can proceed

Page 34: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

34

Concurrent Pascal: cobegin/coend

• Limited concurrency primitive
• Example – cobegin/coend execute sequential blocks in parallel:

    x := 0;
    cobegin
      begin x := 1; x := x+1 end;
      begin x := 2; x := x+1 end;
    coend;
    print(x);

(Diagram: after x := 0, the two branches x := 1; x := x+1 and x := 2; x := x+1 run in parallel, and then print(x) executes.)

Atomicity is at the level of the assignment statement

Page 35: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

35

Mutual exclusion

• Sample action

    procedure sign_up(person)
    begin
      number := number + 1;
      list[number] := person;
    end;

• Problem with parallel execution

    cobegin
      sign_up(fred);
      sign_up(bill);
    end;

(Diagram: the list, which already holds bob, should gain both fred and bill, but with an unlucky interleaving both calls write into the same slot, so one name appears twice and the other is lost.)

Page 36: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

36

Locks and Waiting

<initialize concurrency control>
cobegin
  begin
    <wait>
    sign_up(fred);   // critical section
    <signal>
  end;
  begin
    <wait>
    sign_up(bill);   // critical section
    <signal>
  end;
end;

Need atomic operations to implement wait
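A hedged Java rendering of the <wait>/<signal> pattern above, using java.util.concurrent.locks.ReentrantLock to guard the sign_up critical section (the class and field names are illustrative, not from the slides):

    import java.util.concurrent.locks.ReentrantLock;

    public class SignUpSheet {
        private final ReentrantLock lock = new ReentrantLock(); // plays the role of <wait>/<signal>
        private final String[] list = new String[100];
        private int number = 0;

        public void signUp(String person) {
            lock.lock();              // <wait>: enter the critical section
            try {
                number = number + 1;  // the two statements now execute without interleaving
                list[number] = person;
            } finally {
                lock.unlock();        // <signal>: leave the critical section
            }
        }
    }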

Page 37: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

37

Mutual exclusion primitives

• Atomic test-and-set
  – Instruction atomically reads and writes some location
  – Common hardware instruction
  – Combine with a busy-waiting loop to implement a mutex
• Semaphore
  – Avoids the busy-waiting loop
  – Keeps a queue of waiting processes
  – Scheduler has access to the semaphore; the process sleeps
  – Disable interrupts during semaphore operations
    • OK since the operations are short
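As an illustration (not from the slides), the test-and-set plus busy-waiting idea can be sketched in Java with AtomicBoolean, whose compareAndSet is an atomic read-and-write:

    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void lock() {
            // Busy-wait until the atomic test-and-set succeeds.
            while (!locked.compareAndSet(false, true)) {
                Thread.onSpinWait();   // spin hint; available from Java 9
            }
        }

        public void unlock() {
            locked.set(false);
        }
    }

A semaphore avoids the spinning: java.util.concurrent.Semaphore queues waiting threads and puts them to sleep instead of letting them loop.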

Page 38: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

38

Monitor Brinch-Hansen, Dahl, Dijkstra, Hoare

• Synchronized access to private data. Combines:
  – private data
  – a set of procedures (methods)
  – a synchronization policy
• At most one process may execute a monitor procedure at a time; this process is said to be in the monitor.
• If one process is in the monitor, any other process that calls a monitor procedure will be delayed.
• Modern terminology: synchronized object (encapsulated state behind an interface, protected by a lock)

Page 39: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

39

Java Concurrency
• Threads
  – Create a process by creating a thread object
• Communication
  – Shared variables
  – Method calls
• Mutual exclusion and synchronization
  – Every object has a lock (inherited from class Object)
    • synchronized methods and blocks
  – Synchronization operations (inherited from class Object)
    • wait: pause the current thread until another thread calls notify
    • notify: wake up one waiting thread
    • notifyAll: wake up all waiting threads
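A small, hedged sketch of these primitives in use: a one-slot buffer written as a monitor-style synchronized object with wait/notifyAll (the class is illustrative, not from the slides).

    public class OneSlotBuffer {
        private String slot;   // encapsulated state, guarded by the object's lock

        public synchronized void put(String s) throws InterruptedException {
            while (slot != null) {   // cooperation: wait until the slot is empty
                wait();
            }
            slot = s;
            notifyAll();             // wake any thread waiting in get()
        }

        public synchronized String get() throws InterruptedException {
            while (slot == null) {   // cooperation: wait until something is available
                wait();
            }
            String s = slot;
            slot = null;
            notifyAll();             // wake any thread waiting in put()
            return s;
        }
    }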

Page 40: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

40

Java Threads

• Thread
  – A set of instructions to be executed one at a time, in a specified order
• Java thread objects
  – Object of class Thread
  – Methods inherited from Thread:
    • start: called to spawn a new thread of control; causes the VM to call the run method
    • (suspend: freeze execution)
    • (interrupt: freeze execution and throw an exception to the thread)
    • (stop: forcibly cause the thread to halt)
  – Objects can implement the Runnable interface and be passed to a thread

    public interface Runnable {
        public void run();
    }
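A minimal usage sketch (class names are illustrative): creating a thread both by subclassing Thread and by passing a Runnable to the Thread constructor.

    public class ThreadDemo {
        public static void main(String[] args) {
            // 1. Subclass Thread and override run().
            Thread t1 = new Thread() {
                @Override public void run() {
                    System.out.println("hello from t1");
                }
            };

            // 2. Pass a Runnable (here a lambda) to the Thread constructor.
            Thread t2 = new Thread(() -> System.out.println("hello from t2"));

            t1.start();   // start() spawns the new thread; the VM then calls run()
            t2.start();
        }
    }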

Page 41: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

41

Interaction between threads

• Shared variables
  – Two threads may assign/read the same variable
  – Programmer responsibility: avoid race conditions by explicit synchronization!!
• Method calls
  – Two threads may call methods on the same object
• Synchronization primitives
  – Each object has an internal lock, inherited from Object
  – Synchronization primitives are based on object locking

Page 42: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

42

Synchronization example

• Objects may have synchronized methods
• Can be used for mutual exclusion
  – Two threads may share an object.
  – If one calls a synchronized method, this locks the object.
  – If the other calls a synchronized method on the same object, this thread blocks until the object is unlocked.

Page 43: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

43

Synchronized methods

• Marked by keyword
    public synchronized void commitTransaction(…) {…}
• Provides mutual exclusion
  – At most one synchronized method can be active per object
  – Unsynchronized methods can still be called
    • Programmer must be careful
• Not part of the method signature
  – A synchronized method is equivalent to an unsynchronized method whose body consists of a synchronized block
  – A subclass may replace a synchronized method with an unsynchronized method
  – This problem is known as the inheritance anomaly
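The equivalence mentioned above can be made explicit; a short sketch (class and method names are illustrative):

    public class Account {
        private int balance;

        // A synchronized method ...
        public synchronized void deposit(int amount) {
            balance += amount;
        }

        // ... behaves like an unsynchronized method whose body is a synchronized block on this.
        public void depositEquivalent(int amount) {
            synchronized (this) {
                balance += amount;
            }
        }
    }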

Page 44: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

44

Aspects of Java Threads

• Portable since part of the language
  – Easier to use in basic libraries than C system calls
  – Example: the garbage collector is a separate thread
• General difficulty combining serial/concurrent code
  – Serial to concurrent
    • Code written for serial execution may not work in a concurrent system
  – Concurrent to serial
    • Code with synchronization may be inefficient in serial programs (10-20% unnecessary overhead)
• Abstract memory model
  – Shared variables can be problematic on some implementations
  – Java 1.5 has expanded the definition of the memory model

Page 45: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

45

C# Threads

• Basic thread operations
  – Any method can run in its own thread, i.e. there is no need to pass a class implementing a run method
  – A thread is created by creating a Thread object
  – The Thread class is sealed – thus no inheritance from it
  – Creating a thread does not start its concurrent execution; that must be requested through the Start method
  – A thread can be made to wait for another thread to finish with Join
  – A thread can be suspended with Sleep
  – A thread can be terminated with Abort

Page 46: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

46

C# Threads

• Synchronizing threads
  – The Interlocked class
  – The lock statement
  – The Monitor class
• Evaluation
  – An advance over Java threads, e.g. any method can run in its own thread
  – Thread termination is cleaner than in Java
  – Synchronization is more sophisticated

Page 47: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

47

Polyphonic C#

• An extension of the C# language with new concurrency constructs
• Based on the join calculus
  – A foundational process calculus like the π-calculus, but better suited to asynchronous, distributed systems
• A single model which works both for
  – local concurrency (multiple threads on a single machine)
  – distributed concurrency (asynchronous messaging over LAN or WAN)
• It is different
• But it's also simple – if Mort can do any kind of concurrency, he can do this

Page 48: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

48

In one slide:

• Objects have both synchronous and asynchronous methods.
• Values are passed by ordinary method calls:
  – If the method is synchronous, the caller blocks until the method returns some result (as usual).
  – If the method is async, the call completes at once and returns void.
• A class defines a collection of chords (synchronization patterns), which define what happens once a particular set of methods has been invoked. One method may appear in several chords.
  – When pending method calls match a pattern, its body runs.
  – If there is no match, the invocations are queued up.
  – If there are several matches, an unspecified pattern is selected.
  – If a pattern containing only async methods fires, the body runs in a new thread.

Page 49: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

49

Extending C# with chords

• Interesting well-formedness conditions:
  1. At most one header can have a return type (i.e. be synchronous).
  2. Inheritance restriction.
  3. "ref" and "out" parameters cannot appear in async headers.

• Classes can declare methods using generalized chord-declarations instead of method-declarations.

    chord-declaration ::= method-header [ & method-header ]* body
    method-header ::= attributes modifiers [return-type | async] name (parms)

Page 50: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

50

A Simple Buffer

class Buffer {

String get() & async put(String s) {

return s;

}

}

• Calls to put() return immediately (but are internally queued if there's no waiting get()).
• Calls to get() block until/unless there's a matching put().
• When there's a match the body runs, returning the argument of the put() to the caller of get().
• Exactly which pairs of calls are matched up is unspecified.

Page 51: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

51

OCCAM

• A program consists of processes and channels
• A process is code containing channel operations
• A channel is a data object
• All synchronization is via channels
• Formal foundation based on CSP

Page 52: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

52

Channel Operations in OCCAM

• Read a data item D from channel C
  – C ? D
• Write a data item Q to channel C
  – C ! Q
• If the reader accesses the channel first, it waits for the writer, and then both proceed after the transfer.
• If the writer accesses the channel first, it waits for the reader, and both proceed after the transfer.
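This rendezvous behaviour can be approximated in Java (a loose analogy, not occam) with a SynchronousQueue, where both reader and writer block until the other side arrives:

    import java.util.concurrent.SynchronousQueue;

    public class RendezvousDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            Thread writer = new Thread(() -> {
                try {
                    channel.put(42);   // blocks until a reader is ready (like C ! Q)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            writer.start();

            int d = channel.take();    // blocks until a writer is ready (like C ? D)
            System.out.println("received " + d);
        }
    }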

Page 53: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

53

Concurrent ML

• Threads
  – A new type of entity
• Communication
  – Synchronous channels
• Synchronization
  – Channels
  – Events
• Atomicity
  – No specific language support

Page 54: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

54

Threads

• Thread creation
  – spawn : (unit -> unit) -> thread_id

• Example code

    CIO.print "begin parent\n";
    spawn (fn () => (CIO.print "child 1\n"));
    spawn (fn () => (CIO.print "child 2\n"));
    CIO.print "end parent\n"

• Result

end parent

child 2

child 1

begin parent

Page 55: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

55

Channels

• Channel creation
  – channel : unit -> 'a chan
• Communication
  – recv : 'a chan -> 'a
  – send : ('a chan * 'a) -> unit
• Example

    ch = channel();
    spawn (fn () => … <A> … send(ch,0); … <B> …);
    spawn (fn () => … <C> … recv ch; … <D> …);

• Result

send/recv<C>

<A>

<D>

<B>

Page 56: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

56

CML programming

• Functions
  – Can write functions : channels -> threads
  – Build a concurrent system by declaring channels and "wiring together" sets of threads
• Events
  – Delayed actions that can be used for synchronization
  – Powerful concept for concurrent programming
• Sample application
  – eXene – a concurrent uniprocessor window system

Page 57: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

57

A CML implementation (simplified)

• Use queues with side-effecting functions

    datatype 'a queue = Q of {front: 'a list ref, rear: 'a list ref}
    fun queueIns (Q(…)) = (* insert into queue *)
    fun queueRem (Q(…)) = (* remove from queue *)

• And continuations

    val enqueue = queueIns rdyQ
    fun dispatch () = throw (queueRem rdyQ) ()
    fun spawn f = callcc (fn parent_k =>
        (enqueue parent_k; f (); dispatch()))

Source: Appel, Reppy

Page 58: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

58

Fortress

• Fortress STM

Page 59: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

59

Fortress Atomic blocks

Page 60: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

60

Software Transactional Memory

Locks are hard to get right
• Programmability vs scalability

Transactional memory is an appealing alternative
• Simpler programming model
• Stronger guarantees
  • Atomicity, consistency, isolation
  • Deadlock avoidance
• Closer to programmer intent
• Scalable implementations

Questions
• How to lower TM overheads – particularly in software?
• How to balance granularity / scalability?
• How to co-exist with other concurrency constructs?

Page 61: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

61

Language issues in client/server programming

• Communication mechanisms– RPC, Remote Objects, SOAP

• Data representation languages– XDR, ASN.1, XML

• Parsing and deparsing between internal and external representation

• Stub generation

Page 62: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

62

Client/server example

The basic organization of the X Window System

A major task of most clients is to interact with a human user and a remote server.

Page 63: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

63

Client-Side Software for Distribution Transparency

• A possible approach to transparent replication of a remote object using a client-side solution.

Page 64: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

64

The Stub Generation Process

(Diagram: the stub generator takes an interface specification and produces a client stub, a server stub and a common header. The client stub, client source and RPC library are compiled and linked into the client program; the server stub, server source and RPC library are compiled and linked into the server program.)

Page 65: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

65

RPC and the OSI Reference Model

Application Layer

Presentation Layer (XDR)

Session Layer (RPC)

Transport Layer (UDP)

Page 66: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

66

Representation

• Data must be represented in a meaningful format.
• Methods:
  – Sender or receiver "makes it right"
    • Network Data Representation (NDR)
    • Transmit an architecture tag with the data
  – Represent data in a canonical (or standard) form
    • XDR
    • ASN.1
• Note – these are languages, but traditional DS programmers don't like programming languages, except C

Page 67: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

67

XDR - eXternal Data Representation

• XDR is a universally used standard from Sun Microsystems used to represent data in a network canonical (standard) form.
• A set of conversion functions is used to encode and decode data; for example, xdr_int() is used to encode and decode integers.
• Conversion functions exist for all standard data types
  – integers, chars, arrays, …
• For complex structures, RPCGEN can be used to generate conversion routines.
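Java has no xdr_int(), but the idea of a canonical wire form can be sketched with ByteBuffer, which is big-endian by default just as XDR is (a loose analogy, not the XDR library itself):

    import java.nio.ByteBuffer;

    public class CanonicalIntDemo {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(4);   // big-endian by default, like XDR
            buf.putInt(256);                           // encode into the canonical form
            buf.flip();
            int decoded = buf.getInt();                // decode it again
            System.out.println(decoded);               // 256 on any platform
        }
    }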

Page 68: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

68

RPC Example

(Diagram: RPCGEN processes date.x and generates date_clnt.c, date_svc.c, date_xdr.c and date.h. gcc, linking against the RPC library (-lnsl), compiles client.c with date_clnt.c into the client, and date_proc.c with date_svc.c into date_svc.)

Page 69: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

69

XDR Example

#include <rpc/xdr.h>

..

XDR sptr;             // XDR stream

XDR *xdrs;            // pointer to the XDR stream

char buf[BUFSIZE];    // buffer to hold XDR data

xdrs = &sptr;

xdrmem_create(xdrs, buf, BUFSIZE, XDR_ENCODE);

..

int i = 256;

xdr_int(xdrs, &i);

printf("position = %d.\n", xdr_getpos(xdrs));


Page 70: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

70

Abstract Syntax Notation 1 (ASN.1)

• ASN.1 is a formal language that has two features:
  – a notation used in documents that humans read
  – a compact encoded representation of the same information used in communication protocols.
• ASN.1 uses a tagged message format:
  – < tag (data type), data length, data value >
• Simple Network Management Protocol (SNMP) messages are encoded using ASN.1.

Page 71: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

71

Distributed Objects

• CORBA• Java RMI• SOAP and XML

Page 72: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

72

Distributed Objects: Proxy and Skeleton in Remote Method Invocation

(Diagram: on the client, object A calls a proxy for B; the request passes through the client's remote reference and communication modules to the server, where the skeleton and dispatcher for B's class invoke the remote object B, and the reply travels back the same way.)

Page 73: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

73

CORBA

• Common Object Request Broker Architecture
• An industry standard developed by the OMG to help in distributed programming
• A specification for creating and using distributed objects
• A tool for enabling multi-language, multi-platform communication
• A CORBA-based system is a collection of objects that isolates the requestors of services (clients) from the providers of services (servers) by an encapsulating interface

Page 74: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

74

CORBA objects

They are different from typical programming objects in three ways:
• CORBA objects can run on any platform
• CORBA objects can be located anywhere on the network
• CORBA objects can be written in any language that has an IDL mapping

Page 75: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

75

(Diagram: a request from a client to an object implementation within a network. The client and the object implementation each talk to their ORB through IDL interfaces, and the two ORBs communicate over the network.)

Page 76: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

76

IDL (Interface Definition Language)

• CORBA objects have to be specified with interfaces (as with RMI), defined in a special definition language, IDL.
• IDL defines the types of objects by defining their interfaces; it describes interfaces only, not implementations.
• Through its IDL definition, an object implementation tells its clients what operations are available and how they should be invoked.
• Some programming languages have an IDL mapping (C, C++, Smalltalk, Java, Lisp)

Page 77: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

77

(Diagram: the IDL compiler reads the IDL file and generates a client stub file and a server skeleton file. The client implementation is built against the stub, the object implementation against the skeleton, and the two communicate through the ORB.)

Page 78: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

78

The IDL compiler

• It accepts as input an IDL file written using any text editor (fileName.idl)
• It generates the stub and the skeleton code in the target programming language (e.g. Java stub and C++ skeleton)
• The stub is given to the client as a tool to describe the server functionality; the skeleton file is implemented at the server.

Page 79: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

79

IDL Example

module katytrail {
  module weather {
    struct WeatherData {
      float temp;
      string wind_direction_and_speed;
      float rain_expected;
      float humidity;
    };
    typedef sequence<WeatherData> WeatherDataSeq;
    interface WeatherInfo {
      WeatherData get_weather( in string site );
      WeatherDataSeq find_by_temp( in float temperature );
    };

Page 80: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

80

IDL Example Cont.

    interface WeatherCenter {
      void register_weather_for_site( in string site, in WeatherData site_data );
    };
  };
};

Both interfaces will have object implementations. A different type of client will talk to each of the interfaces.

The object implementations can be done in one of two ways: through inheritance or through a tie.

Page 81: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

81

Stubs and Skeletons

• In terms of CORBA development, the stub and skeleton files are standard in terms of their target language.
• Each file exposes the same operations specified in the IDL file.
• Invoking an operation on the stub file will cause the method to be executed in the skeleton file.
• The stub file allows the client to manipulate the remote object with the same ease with which a local object is manipulated.

Page 82: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

82

Java RMI

• Overview
  – Supports remote invocation of Java objects
  – Key: Java Object Serialization – stream objects over the wire
  – Language specific
• History
  – Goal: RPC for Java
  – First release in JDK 1.0.2, used in Netscape 3.01
  – Full support in JDK 1.1, intended for applets
  – JDK 1.2 added persistent references, custom protocols, more support for user control

Page 83: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

83

Java RMI

• Advantages
  – True object-orientation: objects as arguments and values
  – Mobile behavior: returned objects can execute on the caller
  – Integrated security
  – Built-in concurrency (through Java threads)
• Disadvantages
  – Java only
    • Advertises support for non-Java
    • But this is external to RMI – requires Java on both sides

Page 84: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

84

Java RMI Components

• Base RMI classes
  – Extend these to get RMI functionality
• Java compiler – javac
  – Recognizes RMI as an integral part of the language
• Interface compiler – rmic
  – Generates stubs from class files
• RMI registry – rmiregistry
  – Directory service
• RMI run-time activation system – rmid
  – Supports activatable objects that run only on demand

Page 85: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

85

RMI Implementation

(Diagram: the client object in the client host's JVM calls a stub; the stub talks to a skeleton in the server host's JVM, which invokes the remote object.)

Page 86: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

86

Java RMI Object Serialization

• Java can send an object to be invoked at a remote site
  – Allows objects as arguments/results
• Mechanism: Object Serialization
  – The object passed must implement Serializable
  – Provides methods to translate objects to/from a byte stream
• Security issues:
  – Ensure the object is not tampered with during transmission
  – Solution: class-specific serialization – throw it on the programmer

Page 87: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

87

Building a Java RMI Application

• Define remote interface
  – Extend java.rmi.Remote
• Create server code
  – Implements the interface
  – Creates a security manager, registers with the registry
• Create client code
  – Define an object as an instance of the interface
  – Look up the object in the registry
  – Call the object
• Compile and run
  – Run rmic on the compiled classes to create stubs
  – Start the registry
  – Run the server, then the client
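A hedged skeleton of those steps (interface and class names are illustrative, and each type would normally live in its own source file; modern JDKs generate stubs dynamically, so rmic is often unnecessary):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // 1. Remote interface: extends java.rmi.Remote, every method throws RemoteException.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // 2. Server: implement the interface, export the object, register it.
    class GreeterServer implements Greeter {
        public String greet(String name) { return "Hello, " + name; }

        public static void main(String[] args) throws Exception {
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("greeter", stub);
        }
    }

    // 3. Client: look the object up in the registry and call it.
    class GreeterClient {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("localhost");
            Greeter g = (Greeter) registry.lookup("greeter");
            System.out.println(g.greet("world"));
        }
    }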

Page 88: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

88

Parameter Passing

• Primitive types– call-by-value

• Remote objects– call-by-reference

• Non-remote objects– call-by-value

– use Java Object Serialization

Page 89: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

89

Java Serialization

• Writes an object as a sequence of bytes
• Writes it to a stream
• Recreates it on the other end
• Creates a brand new object with the old data
• Objects can be transmitted using any byte stream (including sockets and TCP).
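A small sketch of the mechanism (class names are illustrative): write an object to a byte stream and read a brand new copy back.

    import java.io.*;

    public class SerializationDemo {
        static class Point implements Serializable {
            int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        public static void main(String[] args) throws Exception {
            Point original = new Point(3, 4);

            // Write the object as a sequence of bytes.
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }

            // Recreate it on the "other end": a new object with the old data.
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                Point copy = (Point) in.readObject();
                System.out.println(copy.x + "," + copy.y);   // 3,4
                System.out.println(copy == original);        // false: a distinct object
            }
        }
    }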

Page 90: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

90

Codebase Property

• Stub classpaths can be confusing
  – 3 VMs, each with its own classpath
  – Server vs. Registry vs. Client
• The RMI class loader always loads stubs from the CLASSPATH first
• Next, it tries downloading classes from a web server
  – (but only if a security manager is in force)
• java.rmi.server.codebase specifies which web server

Page 91: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

91

CORBA vs. RMI

• CORBA was designed for language independence, whereas RMI was designed for a single language where objects run in a homogeneous environment
• CORBA interfaces are defined in IDL, while RMI interfaces are defined in Java
• CORBA objects are not garbage collected, because they are language independent and have to be consistent with languages that do not support garbage collection; RMI objects, on the other hand, are garbage collected automatically

Page 92: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

92

SOAP Introduction

• SOAP is a simple, lightweight, text-based protocol
• SOAP is an XML-based protocol (XML encoding)
• SOAP is a remote procedure call protocol, not completely object oriented
• SOAP can be wired over any protocol

SOAP is a simple lightweight protocol with a minimum set of rules for invoking remote services using XML data representation and HTTP as the wire.

• Main goal of the SOAP protocol: interoperability
• SOAP does not specify any advanced distributed services.

(Diagram: SOAP connecting Windows, Unix, mainframe and e-commerce systems.)

Page 93: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

93

Why SOAP – What’s wrong with existing distributed technologies

• Platform- and vendor-dependent solutions (DCOM – Windows) (CORBA – ORB vendors) (RMI – Java)
• Different data representation schemes (CDR – NDR)
• Complex client-side deployment
• Difficulties with firewalls: firewalls allow only specific ports (port 80), but DCOM and CORBA assign port numbers dynamically
• In short, these distributed technologies do not communicate easily with each other because of the lack of standards between them.

Page 94: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

94

Base Technologies – HTTP and XML

• SOAP uses existing technologies and invents no new technology.
• XML and HTTP are accepted and deployed on all platforms.
• Hypertext Transfer Protocol (HTTP)
  – HTTP is a very simple, text-based protocol.
  – HTTP layers request/response communication over TCP/IP. HTTP supports a fixed set of methods like GET and POST.
  – Client/server interaction:
    • Client requests to open a connection to the server on the default port number
    • Server accepts the connection
    • Client sends a request message to the server
    • Server processes the request
    • Server sends a reply message to the client
    • Connection is closed
  – HTTP servers are scalable, reliable and easy to administer.
• SOAP can bind to any protocol – HTTP, SMTP, FTP
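Those request/response steps, sketched in Java with the standard HttpURLConnection (the URL is a placeholder):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpGetDemo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/");                  // placeholder URL
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");                              // client sends a request

            System.out.println("status: " + conn.getResponseCode());   // server sends a reply
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println(in.readLine());                     // first line of the body
            }
            conn.disconnect();                                         // connection is closed
        }
    }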

Page 95: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

95

Extensible Markup Language (XML)

• XML is a platform-neutral data representation.
• HTML combines data and representation, but XML contains just structured data.
• XML has no fixed set of tags; users can build their own customized tags.

    <student>
      <full_name>Bhavin Parikh</full_name>
      <email>[email protected]</email>
    </student>

• XML is platform and language independent.
• XML is text-based, easy to handle, and can be easily extended.

Page 96: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

96

Architecture diagram

(Diagram: a client application (COM, CORBA or Java RMI client) calls a remote service using SOAP, either directly or through a proxy object that hides the SOAP details. An XML parser and SOAP library, guided by a Web Services Description Language document, build the SOAP request, which travels over HTTP(S) to an HTTP server and SOAP listener; the listener can be implemented as ASP, JSP, CGI or a servlet. A mapping tool maps the SOAP request onto the remote service (COM, CORBA or RMI object), and the SOAP response travels back. SOAP = HTTP + XML + RPC, or SOAP = HTTPS + XML + RPC.)

Page 97: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

97

Parsing XML Documents

• Remember: XML is just text
• Simple API for XML (SAX) parsing
  – SAX is typically most efficient
  – No in-memory representation is built!
    • Left to the developer
• Document Object Model (DOM) parsing
  – "Parsing" is not the fundamental emphasis.
  – A "DOM object" is a representation of the XML document in a tree format.

Page 98: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

98

Parsing: Examples

• SaxParseExample
  – "Callback" functions to process nodes
• DomParseExample
  – Use of JAXP (Java API for XML Parsing)
    • Implementations can be 'swapped', such as replacing Apache Xerces with Sun Crimson.
  – JAXP does not include some 'advanced' features that may be useful.
  – SAX is used behind the scenes to create the object model
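A minimal SAX sketch using the JAXP API (element names taken from the earlier <student> example; the handler is illustrative):

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class SaxDemo {
        public static void main(String[] args) throws Exception {
            String xml = "<student><full_name>Bhavin Parikh</full_name></student>";

            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")),
                    new DefaultHandler() {
                        // "Callback" functions invoked as the parser walks the document.
                        @Override
                        public void startElement(String uri, String localName,
                                                 String qName, Attributes attrs) {
                            System.out.println("start element: " + qName);
                        }

                        @Override
                        public void characters(char[] ch, int start, int length) {
                            System.out.println("text: " + new String(ch, start, length));
                        }
                    });
        }
    }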

Page 99: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

99

Web-based applications today

• Presentation: HTML, CSS, Javascript, Flash, Java applets, ActiveX controls
• Business logic: C#, Java, VB, PHP, Perl, Python, Ruby … (beans, servlets, CGI, ASP.NET, …)
• Database: SQL
• File system
• Application server, web server, content management system
• Operating system
• Plus: sockets, HTTP, email, SMS, XML, SOAP, REST, Rails, reliable messaging, AJAX, … and replication, distribution, load-balancing, security, concurrency

Page 100: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

100

Languages for distributed computing

• Motivation
  – Why all the fuss about language and platform independence?
    • It is extremely inefficient to parse/deparse to/from external/internal representation
    • 95% of all computers run Windows anyway
    • There is a JVM for almost any processor you can think of
    • Few programmers master more than one programming language anyway
  – Develop a coherent programming model for all aspects of an application

Page 101: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

101

Facile Programming Language

• Integration of multiple paradigms
  – Functions
  – Types/complex data types
  – Concurrency
  – Distribution/soft real-time
  – Dynamic connectivity
• Implemented as an extension to SML
• Syntax for concurrency similar to CML

Page 102: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

102

Page 103: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

103

Facile implementation

• Pre-emptive scheduler implemented at the lowest level
  – Exploiting CPS translation => state characterised by the set of registers
• Garbage collector used for linearizing data structures
• Lambda-level code used as an intermediate language when shipping data (including code) in heterogeneous networks
• Native representation is shipped when possible
  – i.e. same architecture and within the same trust domain
• Possibility to mix interpretation and JIT depending on usage

Page 104: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

104

Conclusion

• Concurrency may be an order of magnitude more difficult to handle

• Programming language support for concurrency may help make the task easier

• Which concurrency constructs to add to the language is still a very active research area

• If you add concurrency constructs, be sure you base them on a formal model!

Page 105: Languages and Compilers (SProg og Oversættere) Lecture 14 Concurrency and distribution

105

The guiding principle

• Provide a better level of abstraction
• Make invariants and intentions more apparent
  – Part of the language syntax
  – Part of the type system
  – Part of the interface
• Give stronger compile-time guarantees (types)
• Enable different implementations and optimizations
• Expose structure for other tools to exploit (e.g. static analysis)

Put important features in the language itself, rather than in libraries