
[IEEE 2011 3rd International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES 2011) - Athens, Greece (2011.05.4-2011.05.6)] 2011 Third International Conference




Disaster Management Evaluation and Recommendation

Vassilios Vescoukis and Nikolaos D. Dulamis

National Technical University of Athens,

9, Heroon Polytechiou Str. Zografou, Athens 157 80, Greece,

Email: {[email protected], [email protected]}

Abstract— It is clear that information technology plays an important role in facilitating disaster management and in allowing planners to handle disasters more efficiently. Today, distributed architectures have been proposed in the area of environmental engineering which present several advantages compared to centralized frameworks. However, current systems lack methods that allow experts to dynamically construct, retrieve and exchange disaster management plans, to orchestrate several simulation models according to the workflow (plan) constraints, and to recommend the most appropriate management plans according to past experience. This gap is addressed in this paper by proposing a novel architectural design framework, based on the design principles of the Service Oriented Architecture (SOA), that allows interoperable description and construction of disaster management plans, easy service orchestration and execution, as well as dynamic decision making and disaster management plan ranking. All these issues are evaluated in the use case of a fire expansion framework.

Keywords— interoperable disaster management description; service orchestration; dynamic decision making; fire natural disaster use case.

I. INTRODUCTION

It is clear that information technology plays an important role in facilitating disaster management and in allowing planners to handle disasters more efficiently. Climate change can reasonably be expected to increase countries’ vulnerability to natural hazards in the future. We are already witnessing extreme meteorological phenomena, such as large-scale fires and floods. Therefore, we require intelligent systems to identify needs, manage data, and help calibrate responses. Towards this direction, intelligent Environmental Information Management (EIM) systems have been developed, able to collect, process, visualize and interpret geospatial data and workflows of added-value applications so as to intelligently support decision making in case of emergency [1], [2].

The traditional centralized approaches in environmental modeling present a series of limitations in terms of scalability and interoperability, let alone the high cost of integrating data coming from diverse, heterogeneous and independent sources. This is why the focus of a centralized EIM system is mainly on the way of integrating the diverse components and technologies over a common platform, losing sight of the real target, which should be efficient and effective decision making and disaster handling [3]. Ideally, an EIM should not just record the evolution of a natural phenomenon, but primarily suggest actions and plans that will lead to its control.

To address these difficulties, distributed architectures have been proposed in the area of environmental engineering, like the work of [4], which is based on the Common Object Request Broker Architecture (CORBA) [5]. However, implementing CORBA is not a straightforward task. An alternative approach is the use of an open framework of a Service Oriented Architecture (SOA), or the use of Web services implemented over TCP/IP Internet protocols. This comprises a flexible set of design principles that enable easy integration and development of distributed environmental applications [6]. Towards this direction, the Open Geospatial Consortium (OGC) [7] has introduced an interface for implementing web services using geospatial data. The consortium has introduced several specifications that enable the creation of geospatial service oriented architectures which, on the one hand, retain the main architectural principles of SOA and, on the other, describe rules for handling, processing and managing geospatial information coming from diverse, heterogeneous and independent sources. The main interfaces are the Geography Markup Language (GML) [8], which is a representation schema for geospatial data; the Web Feature Service (WFS), which allows clients to retrieve geospatial data encoded in GML; and the Web Map Service (WMS), which provides a simple HTTP interface for requesting geo-registered data [9].

However, the aforementioned specifications and architectures lack QoS provisioning, which is necessary for a real-time implementation. Real-time, or at least just-in-time, operation is a crucial factor in handling a natural phenomenon, since only under such a framework can one actually take proper and prompt actions to control the evolution of the disaster and minimize the damage. In [2], we present an environmental information system able to support real-time synchronization of geospatial data. Its architecture is based on the design principles of Service Oriented Architectures (SOA), exploiting OGC specifications. In this paper, we describe the architectural interface and the real-time middleware able to support efficient representation and management of the geospatial content.

978-0-7695-4419-9/11 $26.00 © 2011 IEEE
DOI 10.1109/VS-GAMES.2011.43

In addition, the architecture of [2] lacks an efficient framework for evaluating disaster management plans. To provide one, we require new service engineering methods and signal processing tools that allow users to construct, orchestrate, execute and evaluate different disaster management plans through the application of appropriate simulation methods. This paper describes an interoperable framework that allows users (disaster managers) to construct different workflows and exchange them under an architecture that guarantees interoperability, accessibility and high exchangeability. For this reason, we incorporate tools that encode a workflow disaster management plan using the XPDL scheme, the XML Process Definition Language [10]. XPDL allows storage and exchange of workflow diagrams to guarantee inter-exchangeability in the workflow design. This is very important, since different expert groups are able to construct their own disaster management plans and exchange them under a highly interoperable framework. The architecture is also equipped with XML programming methods able to orchestrate the services under the workflow constraints. This is accomplished using the Business Process Execution Language (BPEL), which is a definition framework for web service orchestration [11].

The whole architecture is enhanced with intelligent recommendation techniques and decision making schemes able to dynamically select the most salient disaster management plans according to past experience in similar situations. The algorithm assigns a degree of importance to each of a list of evaluation metrics extracted from the service orchestration and execution. In the following, the algorithm estimates the significance of each evaluation metric by exploiting the past experience of the users along with their current interaction with the system.

This paper is organized as follows: Section II presents an overview of the proposed architecture for efficient decision making and ranking in case of disaster management. Section III describes the methods that allow experts to construct, retrieve, store and interoperably exchange disaster management plans; in this section, we also present the methods for service orchestration according to the constraints set by the plans. Section IV depicts a dynamic recommendation architecture able to evaluate the disaster plans according to previous experiences and by exploiting users’ interaction. Section V presents a forest fire use case, and Section VI concludes the paper.

II. SYSTEM OVERVIEW

Figure 1 presents the main components of the proposed architecture. This architecture has been designed to evaluate different management plans, applied to reduce the effects of a natural disaster, and to support tools for real-time, or at least just-in-time, decision making. The architecture exploits concepts from serious games technologies and actively involves the user in the decision making loop by incorporating adaptation mechanisms via users’ interaction that affect the simulation performance of the plans used to handle environmental disasters. Communication among the architectural layers is implemented through a Web service interface. This allows easy integration among the diverse sensorial and often heterogeneous components of the architecture.

Figure 1. The proposed architecture for efficient and effective disaster management evaluation.



The proposed system consists of five main layers: the Construction, the Data Acquisition, the Simulation, the Presentation and the Adaptation Layer.

Construction Layer: The Construction Layer, on the one hand, provides the necessary information tools to the end-users for representing and interoperably describing plans and workflows used to manage environmental disasters. On the other hand, this layer supplies the simulation layer with the appropriate orchestration directives for service (i.e., simulation model) execution. This is very important, since an integrated disaster management plan consists of several dependent actions (items), which are described by the workflow representation schema, imposing constraints on the way these actions should be executed. Thus, we need tools for orchestrating the simulation models associated with the disaster management plans.

The construction layer allows users to create their own personalized disaster management plans, or retrieve them from a pool of plans. It also allows users to create their own service orchestration strategies for coordinating the different dependent actions of a disaster management scenario. The plans are interoperably described using the XPDL [10]-[13] framework; therefore, any vendor’s engine that understands XPDL can execute the workflow description. Service orchestration, on the other hand, is expressed using the framework of BPEL (Business Process Execution Language) [11], [14], [15], which is an XML scripting framework for web service orchestration.

The Construction layer drives the simulation layer, over which services (simulation models) are executed according to the orchestration defined in the construction layer. It also receives inputs from the Adaptation Layer so that, as we describe in the respective paragraph (Adaptation Layer), the plans are evolved and modified taking into account users’ interaction.

Data Acquisition Layer: The Data Acquisition Layer is responsible for capturing and encoding geospatial information. The captured data are encoded using the OGC specifications for interoperability. Intelligent filtering methods are applied to the captured geospatial data to reduce the amount of information that needs to be transmitted. Data filtering takes into account spatio-temporal reduction mechanisms according to the requirements of the network, the terminal capabilities, as well as the acquired properties of the sensorial data. In other words, spatio-temporally redundant information is not transmitted. More details regarding the data acquisition layer and the architecture dealing with the real-time guarantees can be found in [2]. To guarantee real-time performance, a middleware that exploits the Service Oriented Architecture (SOA) principles has been discussed in [2].

The Data Acquisition Layer communicates with the simulation layer offering the most appropriate real-time geospatial information needed in the simulations.
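The spatio-temporal reduction described above can be illustrated with a minimal sketch. The following Python filter is an illustration only, not the system's actual middleware; the sensor name, threshold and heartbeat interval are invented. A reading is forwarded only when it differs enough from the last transmitted value, or when a heartbeat interval has elapsed:

```python
class RedundancyFilter:
    """Drop sensor readings that add no new spatio-temporal information.

    A reading is forwarded only if it differs from the last transmitted
    value by more than `epsilon`, or if `max_age` seconds have elapsed
    since the last transmission (heartbeat).
    """

    def __init__(self, epsilon=0.5, max_age=60.0):
        self.epsilon = epsilon
        self.max_age = max_age
        self._last = {}  # sensor_id -> (value, timestamp)

    def accept(self, sensor_id, value, timestamp):
        last = self._last.get(sensor_id)
        if last is None or abs(value - last[0]) > self.epsilon \
                or timestamp - last[1] >= self.max_age:
            self._last[sensor_id] = (value, timestamp)
            return True   # transmit
        return False      # redundant, suppress

f = RedundancyFilter(epsilon=0.5, max_age=60.0)
print(f.accept("humidity-1", 40.0, 0.0))   # first reading: transmit
print(f.accept("humidity-1", 40.2, 10.0))  # within epsilon: suppress
print(f.accept("humidity-1", 41.0, 20.0))  # changed enough: transmit
print(f.accept("humidity-1", 41.1, 90.0))  # heartbeat elapsed: transmit
```

Tightening `epsilon` trades bandwidth for fidelity, while the heartbeat keeps silent sensors distinguishable from failed ones.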

Simulation Layer: The goal of this layer is to execute several simulations, according to a workflow scenario and the respective service orchestration described in the Construction Layer. Simulation predicts future states regarding disaster evolution in accordance with a specific implementation plan.

Therefore, it is a very important aspect for evaluating the efficiency of a natural disaster implementation plan.

Simulation results are depicted in the 3D visualization schema of the proposed architecture.

Presentation Layer: The forecasted future states of the simulation layer are sent to the presentation layer to provide, faster than real-time, a 3D visualization of the expansion of the natural phenomenon with respect to the actions taken to control it. In addition, in the presentation layer we visualize the current state of the natural phenomenon according to the information received from the Data Acquisition Layer. Visualization is very important, since it allows easy interpretation of the effectiveness of the actions that have already been taken and provides a framework for assessing future disaster management plans.

Adaptation Layer: The role of this layer is to adapt the simulation model parameters and the construction plans exploiting inputs from expert users via an interaction mechanism. In addition, the adaptation layer updates the orchestration mechanisms, which are important for appropriate service execution. Simulation model adaptation is accomplished by taking into consideration the differences between the current state of a natural disaster phenomenon, as described by the real-time sensorial data (Data Acquisition Layer), and the states predicted by the simulation models (Simulation Layer). The adaptation layer allows users to interact with the system and update the performance of the simulation models according to the current conditions of the natural phenomenon. Adaptation is performed either implicitly, by exploiting the outcome of the simulation and the real-life conditions, or by exploiting the user’s interaction.
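The paper does not specify an update rule for this feedback loop, so the following Python sketch is purely an assumption for illustration: a simulation coefficient (here a hypothetical fire spread-rate parameter) is nudged toward the value implied by the discrepancy between observed and predicted evolution, damped by a gain and clamped for stability.

```python
def adapt_parameter(current, observed, predicted, gain=0.5,
                    lo=0.1, hi=10.0):
    """Hypothetical adaptation rule: scale a simulation-model parameter
    by the observed/predicted ratio of some scalar summary of the
    disaster state (e.g. burned-area growth per minute), damped by
    `gain` and clamped to [lo, hi]. Not the paper's actual mechanism.
    """
    if predicted <= 0:
        return current          # no meaningful prediction: keep as-is
    ratio = observed / predicted
    updated = current * (1.0 + gain * (ratio - 1.0))
    return max(lo, min(hi, updated))

# The model under-predicts spread (12 observed vs 8 predicted),
# so the coefficient is increased from 2.0 to 2.5.
print(adapt_parameter(2.0, observed=12.0, predicted=8.0))
```

The clamp keeps a single noisy sensor reading from driving the model to an implausible regime between expert reviews.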

In the following, we mainly concentrate on the construction and recommendation interface of this architecture.

Figure 2. The Components of the Construction Layer.

III. CONSTRUCTION LAYER

Figure 2 presents the main procedural diagram of the Construction Layer. Users are able to create their own disaster management plans through the workflow construction interface, or to retrieve workflows from a database based on previous plans’ experiences. The workflows are represented using interoperable XML schemas. In addition, the user describes the orchestration directives for the service execution.



Orchestration implies time constraints regarding service execution (the execution of simulation models), which are then forwarded to the simulation layer. This is important for guaranteeing real-time or just-in-time performance.

A. Workflow Representation Schemas – Workflow Tools

In the proposed architecture, we represent a disaster management plan using terminology from workflow management, which defines a business process as a network of atomic activities [10]. Each activity is a logical piece of work. Each activity may or may not have dependencies on other ones. Dependencies impose temporal constraints on the workflow execution, thus they have to be modeled either through the ordering of atomic activities, or explicitly, by a runtime clock. In any case, these dependencies are described in the process definition, thus they are considered static and a priori known.

In our implementation, the XML Process Definition Language (XPDL) is adopted for the interoperable representation of a disaster management plan [13]. XPDL is an XML representation schema able to describe business activities. It is a file format that represents the “drawing” of the process definition.

Figure 3. Sample process, modeled using the XPDL notation.

Each workflow, or disaster management plan, is represented by the elements of Activity and Transition, using the terminology of XPDL. Transitions indicate the flow among the activities. Figure 3 presents an example of the XPDL framework. In this example, we have described a workflow of nine activities. The first two activities are executed sequentially. Activities T3, T4, and T5 are executed in parallel; an ‘AND’ gateway has been added to correlate the activities T3, T4, and T5 with T2. Gateways are modeling elements that are used to impose control structures on the activities as they converge and diverge within a process. The ‘OR’ gateway can be used to model decision branching points during the process. An OR gateway is evaluated to indicate which branch should be followed (e.g., T7, T8), while the process modeler can provide a default path to ensure that one condition is always evaluated as TRUE.

The goal of XPDL is to store and exchange the process diagram. It allows one process design tool to write out the diagram and another to read it, with the picture that you see remaining as similar as possible. XPDL can be used to carry the designed workflow from one vendor to another (experts able to handle a natural disaster). Therefore, using the XPDL framework, we achieve interoperability between different protection agencies that design specific disaster management plans and store them in the XPDL format.
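A plan stored this way can be loaded by any compliant tool. The snippet below is a hand-written, heavily simplified XPDL-like fragment: real XPDL packages carry namespaces, graphics coordinates and many more attributes, and the activity names here are invented for part of the Figure 3 example (T1, T2, then an AND gateway fanning out to T3, T4, T5). Python's standard library is used to recover the activity graph:

```python
import xml.etree.ElementTree as ET

# Stripped-down, hypothetical XPDL fragment (namespaces omitted).
XPDL = """
<Package>
  <WorkflowProcesses>
    <WorkflowProcess Id="fire-plan-1" Name="Fire management plan">
      <Activities>
        <Activity Id="T1" Name="Detect ignition"/>
        <Activity Id="T2" Name="Notify authorities"/>
        <Activity Id="G1" Name="AND gateway"/>
        <Activity Id="T3" Name="Run spread simulation"/>
        <Activity Id="T4" Name="Dispatch ground units"/>
        <Activity Id="T5" Name="Plan evacuation routes"/>
      </Activities>
      <Transitions>
        <Transition Id="t1" From="T1" To="T2"/>
        <Transition Id="t2" From="T2" To="G1"/>
        <Transition Id="t3" From="G1" To="T3"/>
        <Transition Id="t4" From="G1" To="T4"/>
        <Transition Id="t5" From="G1" To="T5"/>
      </Transitions>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
"""

def successors(xpdl_text):
    """Build an activity-id -> successor-ids map from the transitions."""
    root = ET.fromstring(xpdl_text)
    graph = {a.get("Id"): [] for a in root.iter("Activity")}
    for t in root.iter("Transition"):
        graph[t.get("From")].append(t.get("To"))
    return graph

print(successors(XPDL)["G1"])  # the AND gateway fans out to T3, T4, T5
```

Because the Activity/Transition structure is all that an engine needs to rebuild the control flow, two agencies exchanging this file recover the same plan topology.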

B. Execution and Simulation Engines – Workflow Platform

XPDL is an interchange format able to describe workflows, and consequently the strategic plans used for handling and managing cases of emergency (see Figure 3). However, it does not guarantee the precise execution of the actions/semantics described. BPEL is an “execution language” using the XML framework. As a programming language, BPEL has variables and operations. It has tools that make it easy to call multiple web services at the same time and synchronize the results. It does not have any concepts to support the graphics of the diagram as XPDL does. Thus, BPEL allows you to (i) create the relationship between the business process and the outside world, (ii) declare the data structures that a business process will use, (iii) describe the procedural logic of the business process: sequences, parallel flows, conditionals, loops, receive and response events, and (iv) declare handlers that will be invoked when something goes wrong.

The goal of BPEL is to provide a definition of web service orchestration: the underlying sequence of interactions, the flow of data from point to point. BPEL is the execution description framework for the workflow actions defined in XPDL. The actual service (simulation model) execution is accomplished at the simulation layer.
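Although BPEL itself is XML, its core control constructs map onto familiar programming notions. The Python sketch below mimics a BPEL `<sequence>` followed by a `<flow>` of three parallel service invocations; the stand-in functions are invented for illustration and take the place of real Web services (simulation models):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_fire():            # stand-in for an invoked Web service
    return {"cell": (14, 7)}

def simulate_spread(event):   # these three run in parallel (<flow>)
    return f"spread-from-{event['cell']}"

def simulate_smoke(event):
    return f"smoke-from-{event['cell']}"

def plan_routes(event):
    return f"routes-around-{event['cell']}"

def run_plan():
    event = detect_fire()                      # <sequence> step 1
    with ThreadPoolExecutor() as pool:         # <flow>: parallel branches
        futures = [pool.submit(f, event)
                   for f in (simulate_spread, simulate_smoke, plan_routes)]
        results = [fut.result() for fut in futures]  # join / synchronize
    return results

print(run_plan())
```

In an actual deployment the engine would perform these invocations over SOAP/HTTP against the WSDL-described services, but the join-after-parallel-branches semantics is the same.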

C. Service Platform

BPEL is for orchestrating Web services. The BPEL program format describes Web service interactions, not the Web services themselves. Service providers must define the Web services using the WSDL standard. A BPEL program must include a WSDL specification of the participating Web services.

When creating services for use within BPEL, it is helpful to use BPEL’s WSDL extensions: add partner link definitions, and define properties and property aliases for important message and correlation values. BPEL refers to an external WSDL using the partnerLink tag. The external WSDL will contain the service and binding elements; its targetNamespace uniquely identifies it. This is presented in Figure 2 as an interaction between the WSDL service description and the BPEL language.

IV. SERVICE EXECUTION AND EVALUATION

The BPEL presentation framework is used for orchestrating the different scenarios designed in the disaster management construction layer. This description is sent to the simulation layer for service (simulation model) execution and orchestration. Simulation estimates the efficiency of the application of this particular disaster management plan (as described by the workflow) and returns the results to the presentation layer for further examination by the experts.

Therefore, for every workflow, say $w_i$, we have the respective service orchestration description $o_i$, as well as a set of evaluation parameters (metrics) $E_i$. The workflows $w_i$ are represented using the XPDL framework and their respective orchestration using the BPEL schema, while the evaluation metrics are returned from the execution service engine, that is, from the simulation layer.



The evaluation metrics are provided to the end-users (experts) for assessing the efficiency of the application of a particular disaster management scenario.

A. Plan Recommendation

It is clear that the final decision on a disaster management plan is the end user’s (expert’s) responsibility. However, in this section we present an intelligent recommendation system able to rank the significance of an evaluation metric, and consequently of a disaster management plan.

In particular, let us assume that we are evaluating the $i$-th workflow $w_i$. Therefore, we have an XML programming orchestration schema $o_i$ for simulating the efficiency of the plan under examination. For every workflow, we have a set of evaluation metrics that assess, in an objective framework, the efficiency of the plan. Let us denote as $E_i = \{e_1^{w_i}, e_2^{w_i}, e_3^{w_i}, \dots\}$ the set of evaluation metrics for the $i$-th workflow. The set contains evaluation parameters, which we denote as $e_j^{w_i}$, $j = 1, 2, \dots, N$.

It is clear that each evaluation parameter $e_j^{w_i}$ has a different degree of importance for the performance efficiency of the plan. It is also clear that the degree of importance highly depends on the profile of the experts handling the disaster, as well as on the adopted policy strategy. We formulate the degree of importance of each evaluation parameter as a weight, say $a_j$, which corresponds to $e_j^{w_i}$. In this way, for every set $E_i = \{e_1^{w_i}, e_2^{w_i}, e_3^{w_i}, \dots\}$, there is also another set $A_i = \{a_1, a_2, a_3, \dots\}$, which expresses the degree of importance of each evaluation metric to the total efficiency of the disaster management plan.

In the following, we describe a policy able to automatically estimate the degrees of importance $a_j$. It is based on the exploitation of the users’ preferences as they interact with the architecture.

In particular, a metric for evaluating a disaster management plan is given by the following equation:

$$A_b = \arg\min_i \sum_j a_j e_j^{w_i} \qquad (1)$$

Equation (1) indicates that the value of each evaluation metric is multiplied by the weight factor $a_j$. Then, the best disaster management plan is the one with the minimum value of $A$ ($A_b$ refers to the minimum value of $A$). We assume that the evaluation metrics increase monotonically with respect to the effect of the disaster, so in case a disaster produces a significant impact on an evaluation metric, say $e_j^{w_i}$, the value of this metric will be high, and vice versa.

It is clear that, using equation (1), we can estimate the minimum value of $A$ in case the weights $a_j$ are known. In the following, we describe an intelligent framework for estimating the weights $a_j$.

Let us suppose that an expert has selected, through the users’ interaction, $K$ management plans as the most important ones. Let us denote as $S_j$ a set that contains the values of the $j$-th evaluation metric over all plans selected as “good” disaster management plans:

$$S_j = \{ e_j^{w_i} : w_i \text{ selected as a ``good'' plan} \} \qquad (2)$$

Then, an efficient framework for selecting the degree of importance $a_j$ is through the variance of the values $e_j^{w_i}$ over the set $S_j$:

$$a_j = \frac{1}{\operatorname{var}(e_j^{w_i})}, \quad \forall e_j^{w_i} \in S_j \qquad (3)$$

Equation (3) means that evaluation metrics whose values deviate strongly across the selected plans are less significant to the total disaster evaluation metric than metrics of almost the same value.
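Under the reading that var(·) in (3) is taken per metric over the set S_j, the ranking scheme of equations (1)–(3) can be sketched in Python as follows. The plan ids, metric names and values are illustrative, and a guard for the zero-variance case is an addition the paper does not discuss:

```python
from statistics import pvariance

def estimate_weights(plans, good):
    """Equation (3): a_j = 1 / var(S_j), where S_j collects the j-th
    metric over the expert-selected "good" plans."""
    n = len(next(iter(plans.values())))
    weights = []
    for j in range(n):
        s_j = [plans[p][j] for p in good]
        var = pvariance(s_j)
        weights.append(1.0 / var if var > 0 else 1.0)  # zero-variance guard
    return weights

def best_plan(plans, weights):
    """Equation (1): pick the plan minimizing sum_j a_j * e_j."""
    score = lambda p: sum(a * e for a, e in zip(weights, plans[p]))
    return min(plans, key=score)

plans = {                # metrics: [burned area, response time, cost]
    "w1": [3.0, 2.0, 8.0],
    "w2": [3.2, 2.1, 1.0],
    "w3": [9.0, 7.0, 1.2],
}
a = estimate_weights(plans, good=["w1", "w2"])
print(best_plan(plans, a))
```

In this toy run, burned area and response time are nearly identical across the expert-selected plans and therefore get large weights, while the widely varying cost metric is almost ignored, which is exactly the behavior equation (3) describes.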

Figure 4. A screenshot of the developed system.

V. USE CASE

A. Description of the System Implementation Scenario

The developed system targets fire detection along with evolution monitoring, so as to provide the authorities involved in forest fire management with tools and plans that will lead to a cessation. The system has been implemented under real-world conditions in the prefecture of Messinia, a regional unit of Greece in the Peloponnese. The Messinia prefecture covers an area of 2,991 km², has a population of 180,264 people, and is located about 200 km southwest of Athens. A screenshot of the implemented system is shown in Figure 4. Apart from forest fire monitoring, value-added services, such as vehicle routing that excludes fire areas, are also implemented.

The platform has been designed to contribute to the early diagnosis and prevention of a forest fire incident. Different types of measurements are captured by the sensor networks, recording the relative humidity, radiation, air, or even smoke in the covered area.

B. Constraints of the Platform

We impose several relatively simple constraints on the platform in order to make it work effectively. These constraints refer to knowledge that we assume will be available to the system during its operation. Firstly, the area under control is a priori known. This is very reasonable in our case, since we cannot protect a forest on-the-fly, i.e., without being aware of the terrestrial particularities of the area; estimations based on human observations can be unreliable and totally misleading, both under- and over-estimating the situation. Secondly, the vegetation type, the density and the specific topography of the area should be known. Thirdly, there is the authorities’ requirement to relatively evaluate the potential of one or more simultaneous forest fire incidents, so as to support their decisions on resource deployment and thus achieve minimal damage.

C. System Performance

This approach is regarded as quite beneficial because the alarm comes entirely from the environment, without human interference or error-prone estimations. The system works even if the sensors get damaged: in this case, the alarm value will be propagated through the network and a sequence of urgent actions and reactions will take place. Furthermore, the total power consumption is low, increasing the maintainability and sustainability of the system.

Interaction with expert users is allowed during an emergency; this way, we improve the evaluation of the system's performance. 3D technologies are exploited for the depiction and presentation of the simulation results.

VI. CONCLUSIONS
In this paper, we proposed a new architecture for supporting efficient decision making in environmental crisis situations. The architecture is based on the design principles of the Service Oriented Architecture (SOA) and incorporates signal processing algorithms able to rank the effectiveness of applying a disaster management plan according to previous (past) experience derived from user interaction and evaluation. The proposed system also incorporates methods that enable experts (end-users) to interoperably construct, store, exchange and retrieve disaster management plans. This is accomplished by exploiting the XPDL framework, which allows interoperable description of workflows; in other words, we map disaster management plans to actions, and transitions among these actions, in a workflow design framework. We then introduce an XML-based language able to orchestrate the different simulation models (services) according to the disaster management plan constraints (workflow). Our scheme exploits the BPEL (Business Process Execution Language) principles for defining Web service orchestration.
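The mapping of a plan to actions and transitions can be sketched as follows; the element names below are a simplified subset inspired by XPDL, generated with Python's standard `xml.etree.ElementTree`, and are not the full XPDL 2.x schema used by the actual system:

```python
import xml.etree.ElementTree as ET

def plan_to_xpdl(plan_id: str, actions: list, transitions: list) -> str:
    """Serialize a disaster management plan as an XPDL-style process:
    each action becomes an <Activity> element and each (from, to) pair
    becomes a <Transition> element. Illustrative only."""
    proc = ET.Element("WorkflowProcess", Id=plan_id)
    acts = ET.SubElement(proc, "Activities")
    for a in actions:
        ET.SubElement(acts, "Activity", Id=a, Name=a)
    trans = ET.SubElement(proc, "Transitions")
    for i, (frm, to) in enumerate(transitions):
        ET.SubElement(trans, "Transition", Id=f"t{i}", From=frm, To=to)
    return ET.tostring(proc, encoding="unicode")

# a toy fire-management plan: evacuation follows the simulation step
xml_doc = plan_to_xpdl("fire-plan",
                       ["RunFireSimulation", "NotifyAuthorities", "Evacuate"],
                       [("RunFireSimulation", "NotifyAuthorities"),
                        ("NotifyAuthorities", "Evacuate")])
```

A BPEL engine would then execute such a process description by invoking the corresponding simulation services in the order the transitions dictate.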

The outputs of the executed services (e.g., a disaster simulation) are forwarded to the presentation layer. In this step, we incorporate an intelligent decision making algorithm able to adjust the degree of importance of several evaluation metrics according to prior-experience knowledge, as provided through users' interaction.
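One simple way to realize such importance adjustment, sketched here under our own assumptions rather than as the paper's actual algorithm, is a weighted ranking whose metric weights are nudged by expert feedback:

```python
def rank_plans(scores: dict, weights: dict) -> list:
    """Rank candidate plans by the weighted sum of their evaluation
    metrics (e.g. burned area avoided, response time, resource cost).
    `scores` maps plan -> {metric: value in [0, 1], higher is better}."""
    total = sum(weights.values())
    w = {m: v / total for m, v in weights.items()}  # normalize weights
    return sorted(scores,
                  key=lambda p: sum(w[m] * scores[p][m] for m in w),
                  reverse=True)

def update_weights(weights: dict, feedback: dict, rate: float = 0.1) -> dict:
    """Nudge each metric's weight by expert feedback in [-1, 1];
    positive feedback means the metric mattered in past incidents."""
    return {m: max(1e-6, weights[m] * (1 + rate * feedback.get(m, 0.0)))
            for m in weights}
```

Repeated feedback rounds thus shift the ranking toward the metrics experts found decisive in earlier emergencies.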

We conclude that the proposed architecture significantly extends the state of the art in efficient decision making for environmental information architectures, through the provisioning of intelligent recommendation systems and interoperable methodologies for plan construction, orchestration and, finally, evaluation.

VII. REFERENCES
[1] A. Annoni, L. Bernard, J. Douglas, J. Greenwood, I. Laiz, M. Lloyd, Z. Sabeur, A. M. Sassen, J. Jacques Serrano and T. Usländer, "Orchestra: Developing a Unified Open Architecture for Risk Management Applications," in Geo-information for Disaster Management, pp. 1-17, Springer, Berlin, 2005.

[2] V. Vescoukis, N. Doulamis, and S. Karagiorgou, "A Service Oriented Architecture for Decision Support Systems in Environmental Crisis Management," Future Generation Computer Systems, Elsevier, to appear.

[3] R. Denzer, "Generic Integration of Environmental Decision Support Systems – State-of-the-Art," Environmental Modelling & Software, Vol. 20, No. 10, pp. 1217–1223, Elsevier, 2005.

[4] P. Papajorgji, H. W. Beck and J. L. Braga, "An Architecture for Developing Service-Oriented and Component-Based Environmental Models," Ecological Modelling, Vol. 179, pp. 61–76, 2004.

[5] M. Henning and S. Vinoski, Advanced CORBA Programming with C++, Addison-Wesley, ISBN 0-201-37927-9, 1999.

[6] I. N. Athanasiadis, "An Intelligent Service Layer Upgrades Environmental Information Management," IEEE IT Professional, pp. 34-39, May/June 2006.

[7] Open Geospatial Consortium, OGC Web site, http://www.opengeospatial.org, last accessed January 2011.

[8] Open Geospatial Consortium, “OpenGIS Geography Markup Language (GML) Implementation Specification,” available at http://www.opengeospatial.org/standards/gml, last accessed Jan. 2011.

[9] Open Geospatial Consortium, "OpenGIS Web Feature Service Implementation Specification," available at http://www.opengeospatial.org/standards/wfs, last accessed Jan. 2011.

[10] W. M. P. van der Aalst, "Business Process Management Demystified: A Tutorial on Models, Systems and Standards for Workflow Management," Lecture Notes in Computer Science, Vol. 3098, Springer, 2004.

[11] X. Fu, T. Bultan and J. Su, "Analysis of Interacting BPEL Web Services," in Proceedings of the 13th International Conference on World Wide Web (WWW '04), New York, NY, USA, 2004.

[12] WfMC, "Terminology & Glossary," WfMC Specification Document WFMC-TC-1011, Workflow Management Coalition, 1999.

[13] WfMC, "XPDL – XML Process Definition Language," WfMC Specification Documents, Workflow Management Coalition, 2005.

[14] Business Process Execution Language, Web site, available at http://www.oasis-open.org, last accessed January 2011.

[15] P. Louridas, "Orchestrating Web Services with BPEL," IEEE Software, Vol. 25, No. 2, pp. 85-87, 2008.
