
CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE
Concurrency Computat.: Pract. Exper. 2004; 16:109–110 (DOI: 10.1002/cpe.766)

Special Issue: Compilers for Parallel Computers (CPC 2001)

This special issue of Concurrency and Computation: Practice and Experience is dedicated to selected papers presented at the ninth Compilers for Parallel Computers workshop, held in Edinburgh, U.K., in June 2001 (CPC 2001). All the papers have been revised and subjected to a rigorous reviewing process.

CPC is an international workshop that has been held in Europe every 18 months since 1989. CPC 2001 was hosted by the Compiler and Architecture Design Group within the Institute of Computer Systems Architecture (ICSA), School of Informatics, University of Edinburgh.

CPC 2001 was the ninth in the series and the first to be held in Scotland. Previous workshops have been held in Oxford (1989), Paris (1990), Vienna (1992), Delft (1993), Malaga (1995), Aachen (1996), Vadstena (1998) and Aussois (2000). The aim of the workshop is to give researchers the opportunity to present their latest results in the field of compiling for parallel computers and to participate in informal discussions.

The overall research focus of CPC is automatic performance optimization, and although CPC is a compiler-centric workshop, related work on performance optimization in tools, runtime systems, architecture and language design is equally welcome. Presentation at the workshop was by invitation only, and we tried to strike a balance between promising new researchers and more established figures.

Current research areas change over time depending on the activities of the community, and this is reflected in the papers submitted to this special issue. In the past, the optimization goal was minimizing time by exploiting large-scale parallelism, but this has been broadened to other goals, e.g. the efficient utilization of embedded systems.

Of the 36 papers presented at CPC, 10 have been submitted and accepted for publication in this issue. The first four papers are concerned with compiling for parallel machines where communication management is critical.

• Compiling data-parallel programs for clusters of SMPs deals with the complex task of compiling for machines with mixed shared and distributed address spaces. It proposes a language-based solution and demonstrates that it can be compiled to produce efficient code.


• Managing distributed shared arrays in a bulk-synchronous parallel programming environment develops an efficient method of implementing irregular communication patterns in the BSP model.

• Data partitioning-based parallel irregular reductions presents new techniques for irregular reductions that increase the available parallelism and reduce load imbalance and communication.

• Group-SPMD programming with orthogonal processor groups introduces a new communications library for programs comprising multiple independent tasks, each of which is data parallel and executed in an SPMD manner.

The remaining papers span a broad range of subjects, from memory consistency models to multimedia instruction sets.

• A compiler for multiple memory models develops a novel technique that compiles a program for different memory models, freeing programmers from having to reason about different consistency models.

• Space–time mapping and tiling: a helpful combination describes how tiling can be considered as a separate post-parallelization optimization phase and, when combined with a cost model, produces an efficient implementation.

• The effect of cache models on iterative compilation for combined tiling and unrolling examines the use of static models to reduce the search space of iterative compilation and shows that good results are achieved with a small number of executions.

• A fast and accurate method for determining a lower bound on execution time presents a novel technique to determine the potential speedup available for different sections of a program as a guide to compiler optimization.

• Parallel object-oriented framework optimization demonstrates that high-level abstraction can improve performance by combining additional knowledge with program transformation.

• SWARP: a retargetable preprocessor for multimedia instructions develops a user-extensible preprocessor that exploits the idiosyncratic multimedia instruction-set extensions available in today’s embedded processors.

I would like to thank Alain Darte for his advice in planning the workshop and my PhD students, Grigori Fursin, Shun Long, Tom Ashby and Bjoern Franke, who ensured the smooth running of the event.

MICHAEL O’BOYLE

ICSA, SCHOOL OF INFORMATICS, UNIVERSITY OF EDINBURGH, U.K.

Copyright © 2004 John Wiley & Sons, Ltd.