
GMPLS Control Plane, Policy-based Management, and Information Modeling

Håkon Lønsethagen Anne-Grethe Kåråsen

Telenor, R&D Snarøyveien 30, 1331 Fornebu, Norway

hakon.lonsethagen|[email protected]

Annikki Welin Ericsson Research, Sweden

Torshamnsgatan 23, SE - 164 80 Stockholm, Sweden [email protected]

Bela Berde Alcatel Reseach & Innovation

Alcatel CIT Route de Nozay, 91460 Marcoussis Cedex, France

[email protected]

Abdelkader Hajjaoui Lucent Technologies, Bell Labs

Larenseweg 50, Hilversum, 1221 CN Netherlands [email protected]

Abstract— Generalized Multi-Protocol Label Switching (GMPLS) is an important enabling technology for the “All-IP” vision, but raises new challenges in multilayer network operations. The main goal of this paper is to investigate the policy-based management (PBM) approach applied to GMPLS network functionality, covering both packet switching and circuit switching technologies. We propose PBM mechanisms and policies to facilitate and improve the collaboration between the management plane and the GMPLS control plane, for the purpose of efficient GMPLS TE and service provisioning. We point out guidelines for policy information modeling, and we propose to extend the PCIM/e with a Policy Event, to explicitly model the triggering of policy rules. This will increase automation and operational efficiency. We also recommend that the relationships between policy information, management information, and related support information be explicitly captured and represented in a coherent modeling approach. The topic of GMPLS TE and service provisioning should be put on the IETF agenda.

Keywords— GMPLS, control plane, policy-based management, TE, service provisioning

I. INTRODUCTION AND OPERATIONAL INCENTIVES

As services move to IP in telecommunications networks around the world, demand for IP network bandwidth is constantly growing. To increase capacity and improve traffic engineering (TE), core networks controlled by Generalized Multi-Protocol Label Switching (GMPLS) [1][2][3] are being introduced. This raises new challenges in multilayer network operation [7], resulting in diverse local and global decision logic distributed across multiple network elements and multiple network layers. While both GMPLS and Multi-Protocol Label Switching (MPLS) continue their respective expansion, we expect that unified environments will progressively benefit from GMPLS.

In the coming years we expect increased deployment of advanced MPLS functionalities in IP/MPLS networks, such as various traffic engineering (TE) and QoS functions. In addition, operators are now starting to invest in lower-layer technologies with control plane (CP) solutions, such as Ethernet, NG-SDH, and optical transport networks. Hybrid, multi-service, multi-technology network elements are also offered. The objective is to have the IP/MPLS capabilities interwork more closely with the underlying technologies in a dynamic, flexible, and cooperative manner. GMPLS provides the basis for multilayer TE for dynamic transport services, which puts the network operator in a position to utilize network resources more adaptively to traffic demands. However, this also increases the network complexity, and thus there is a need for an intelligent, efficient, and coherent management solution to realize the potential of the TE capabilities brought by GMPLS. Such management plane – control plane interactions have been studied for the Automatically Switched Optical Network (ASON) framework [8].

The constant quest for lowering prices and improving services puts great challenges on the network operator and service provider. They must adapt service provisioning and network management processes and solutions to the new situation and improve operational efficiency accordingly, since new services require more complex and dynamic configuration of the network resources. Policy-based (network) management (PBM) is promoted with great promise as an enabling solution that can tackle these challenges. By using PBM, the operator avoids configuring the network nodes one by one; instead, entire network domains can be considered as a whole and configured based on rules set by the operator. This will increase automation between the control and management planes, and enable more efficient and standardized operational processes in multi-vendor environments. PBM assumes that operational entities belonging to network functions are stateful and modeled by a state machine. Classes and relationships


represent the state of an entity, settings to be applied to an entity for both maintaining its state and moving to a new state, and rules for controlling the application of settings. PBM therefore allows network operators to control state changes for a given network function by enforcing settings through policy rules.

Historically, the PBM topics in focus by the standardization organizations have been related to the provisioning and support of IP and DiffServ based QoS, and IPsec. Work on policies for MPLS TE was started [20], but the IETF was reluctant to pursue this work as it did not fit the working group charters at that time, and the efforts have to some extent stopped. Little or no consideration has been given to PBM related to GMPLS TE and service provisioning.

The main goal of this paper is to introduce the PBM approach for the GMPLS network functionality, covering both packet switching and circuit switching technologies [10]. We propose key PBM mechanisms to allow the management plane (MP) to collaborate with, and drive, the GMPLS control plane. The objectives are to review current work in this area, their strengths and weaknesses, and furthermore, to identify and present (different kinds of) policies and functionalities needed when widening the scope to GMPLS TE and service provisioning. We concentrate on policies for operational tasks such as admission control, signaling control, routing control, path calculation, and other TE actions. In particular, we propose to introduce the concept of Policy Event as part of the generic policy framework, which will enable adaptive control and closing of the control-loop. The result is twofold: increased automation and operational efficiency. The implications of policy events on the PBM architecture and information models are analyzed with respect to the different levels of the management hierarchy and corresponding levels of abstractions. Furthermore, based on this exploration and the policy examples, we provide recommendations on how to associate policy information with network and service management information in explicit ways.
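The idea of a Policy Event closing the control loop can be sketched as follows. This is an illustrative Python sketch of the concept only; the class and method names (`LocalPDP`, `on_event`, the "link-utilization" event kind) are our own assumptions, not part of PCIM/e or any cited model:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """Binds a policy action to a policy condition (cf. PCIM/e rules)."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

@dataclass
class PolicyEvent:
    """The proposed extension: an explicit event that triggers rule evaluation."""
    kind: str
    context: dict

class LocalPDP:
    """Minimal local policy decision point; rules are subscribed per event kind."""
    def __init__(self):
        self.rules = {}

    def subscribe(self, event_kind, rule):
        self.rules.setdefault(event_kind, []).append(rule)

    def on_event(self, event):
        fired = []
        for rule in self.rules.get(event.kind, []):
            if rule.condition(event.context):
                rule.action(event.context)  # enforced directly on the trigger: closes the loop
                fired.append(rule.name)
        return fired
```

A rule subscribed to, say, a hypothetical link-utilization event then fires automatically when the event arrives, without management plane intervention, which is the source of the claimed gain in automation.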

The outline of this paper is as follows. Section II provides a review of important PBM concepts and modeling issues, while Section III provides an overview of GMPLS control plane (CP) components, including their PBM related capabilities. Section IV presents an overview of the management information model for GMPLS managed entities, and in Section V we present two use cases exploring how policies are used in GMPLS TE and service provisioning. This section also informally identifies the policy actions we propose, to be included, ultimately, in a policy information model. Section VI discusses policy information modeling issues and ways to capture important relationships between policy and management information. Section VII concludes the paper and points out further work.

II. PBM CONCEPTS, MODELS, AND SOLUTIONS

The purpose of introducing PBM is to increase the flexibility and adaptability of the management system, as well as to reduce network operational costs and increase customer satisfaction. A PBM system allows the behavior of the managed system to be adapted and changed at runtime in response to new requirements from the customer or the network administrator. Policies can be applied to most network and service management areas, such as network configuration, routing, and fault management. The adaptation is achieved by downloading new policies or configuration directives into the system; network operation then proceeds only after the new policies have been validated.

Another goal of PBM is to increase customer satisfaction. This can be achieved by facilitating the translation of the SLA into policies, which can be downloaded into the system immediately at runtime. This gives the provider and the customer the flexibility to agree on both long- and short-term SLAs.

The essence of PBM is that the intelligence of the management system can be manipulated by simple policy operations, such as adding, deleting, and modifying the policy rules located in a policy repository, to adapt to new requirements. This avoids costly and time-consuming system design, implementation, integration, and installation.

A. The IETF PBM Framework

The informational RFC 2753 [4] is a framework for policy-based admission control for IP networks. The aim of the work was to support RSVP based signaling and admission control, but the DiffServ method was not excluded. The document discusses mechanisms for admission control decisions and provides terminology, requirements, and architectural elements, namely the PEP (Policy Enforcement Point) and the PDP (Policy Decision Point). A PDP is a logical entity that makes policy decisions for itself or for other entities (e.g. network elements) that request such decisions. A policy decision involves both the evaluation of a policy rule's conditions and, when those conditions are true, the actions for policy rule enforcement. A PEP is a logical entity that enforces the policy decisions downloaded to it as configuration directives.
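The PDP/PEP division of labor can be sketched in a few lines of Python. This is a minimal illustration of the RFC 2753 roles, not an implementation of the framework; the rule representation, the "accept"/"reject" directives, and the request fields are our own illustrative assumptions:

```python
class PDP:
    """Policy Decision Point: evaluates policy rules on behalf of requesting PEPs."""
    def __init__(self, rules):
        # each rule is a (condition, directive) pair; first matching rule wins
        self.rules = rules

    def decide(self, request):
        for condition, directive in self.rules:
            if condition(request):
                return directive
        return "reject"

class PEP:
    """Policy Enforcement Point: enforces decisions received as configuration directives."""
    def __init__(self, pdp):
        self.pdp = pdp
        self.enforced = []  # directives installed on this node

    def admit(self, request):
        directive = self.pdp.decide(request)
        self.enforced.append((request["flow"], directive))
        return directive != "reject"
```

For example, a PDP loaded with a single bandwidth rule admits a 50 Mb/s flow request and rejects a 500 Mb/s one, with the PEP recording the directive it enforced in each case.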

The RSVP signaling has been extended to support policy-based admission control by allowing RSVP to convey policy related information [5]. In particular, this is useful for transferring policy information between administrative domains. These extensions use the POLICY_DATA object, and the handling of RSVP policy events is specified. However, the general notion of policy-based admission control does not imply that RSVP is used, or that RSVP is used with the POLICY_DATA object.

As an example, Figure 1 shows the important PBM functional blocks (PDP, PEP) and the interaction between the PEP and Integrated Services functions.

RFC 2753 discusses several topics, e.g. interaction with functions in a Local Policy Decision Point (LPDP), bilateral agreements between carriers, priority based admission control, and prepaid calls. It provides a simplified view of how policy-based management should work with IntServ in an environment of IP routers. It does not cover interaction with network management or with other control plane features such as routing.


Figure 1 Policy-based admission control

The Common Open Policy Service (COPS) protocol was developed for conveying policy or configuration information between a policy server (PDP) and a node (PEP) [6].

Perceived from a general and logical level the IP Multimedia Subsystem (IMS, see e.g. 3GPP and ETSI TISPAN) and its service session control framework also borrow concepts from the IETF PBM framework. The IMS policy decision function implemented in the proxy call session control function block uses policy rules for call admission and the authorization of the corresponding usage of QoS resources.

B. Policy Information Models

A general statement regarding policies is provided in [9]: “Policies are used to control the state that a Managed Object is in at any given time; the state itself is modeled using an information model …”. While we explore managed object classes applicable to the GMPLS control plane in Section IV, in this section we point out the main concepts and features related to the representation of policies.

Low-level, protocol-specific management and policy information bases (MIBs and PIBs, respectively) [15][16], as well as high-level, protocol-neutral policy information models (PIMs), have been developed for the purpose of enabling QoS differentiation in IP networks (QPIM) [14]. A general or common PIM has also been developed [12][13] (the Policy Core Information Model with extensions, PCIM/e), from which domain-specific models can be derived, in the way QPIM is derived from PCIM/e.

While a PIB specification such as the DiffServ QoS PIB [16] specifies PRovisioning Classes (PRCs) that can be considered as encoding types of configuration information for a device, the notion of a policy rule is in general considered a basic building block of a PBM system. A policy rule is the binding of a set of policy actions to a set of policy conditions [17]. The conditions are evaluated to determine whether the actions are to be performed. Thus, instances of policy-based configuration directives (PRovisioning Instances, PRIs) are installed in PEPs, while policy rules are installed in repositories associated with PDPs or Local PDPs (LPDPs).

The PCIM/e defines generic policy information entities using object-oriented modeling. Policy rules can be nested in groups and sub-groups. Policy actions and policy conditions can also be nested. The model provides great flexibility in how policy rules can be put together. Note that no policy rules as such are specified by the model. The rules are put together at deployment time, and the network operator has many challenging policy design choices to make when establishing policy rules. Several alternatives may exist for reaching the final policy goal. Rules can be made complex or simple, leading to variation in policy execution times.

While general constructs such as PolicyRule and PolicyGroup are expected to be used directly, a construct such as PolicyAction is expected to be refined by specifying subclasses in application-specific PIMs. Care must be taken so that application-specific PIMs identify only policy actions that are strictly needed, and not just any conceivable management or control action. Policy actions resulting in the provisioning of configuration directives (cf. PIBs and PRCs) are evaluated at provisioning time. A policy action in, e.g., an LPDP, triggered by some “policy event”, is, on the other hand, evaluated directly following this trigger event.
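The nesting of rules in groups and sub-groups described above can be sketched as follows. This is an illustrative reading of the PCIM/e aggregation structure, not a rendition of its CIM class definitions; for simplicity the sketch ANDs a rule's conditions, which is only one of the condition-combination options PCIM/e allows:

```python
class PolicyRule:
    """A rule binds a set of actions to a set of conditions (here simply ANDed)."""
    def __init__(self, name, conditions, actions):
        self.name, self.conditions, self.actions = name, conditions, actions

    def evaluate(self, ctx):
        if all(cond(ctx) for cond in self.conditions):
            for action in self.actions:
                action(ctx)
            return [self.name]
        return []

class PolicyGroup:
    """Groups may contain rules and sub-groups, nested to arbitrary depth."""
    def __init__(self, name, members):
        self.name, self.members = name, members

    def evaluate(self, ctx):
        fired = []
        for member in self.members:  # rules and nested groups are evaluated alike
            fired += member.evaluate(ctx)
        return fired
```

Because groups and rules expose the same `evaluate` interface, a top-level group can mix flat rules with deeply nested sub-groups, which is exactly the flexibility (and the policy-design burden) noted above.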

Section VI further elaborates on policy information modeling issues. Our design goal is a flexible PBM solution allowing adaptation to varying conditions, both on short and long time scales. In particular, adaptation is achieved through the use of LPDPs as well as by explicit policy events.

In general, the network operator must consider his policies with respect to different levels of abstraction; from top-level SLA and business objectives; to network domain wide policies; to node level policies or configuration rules. This is challenging in the domain of QoS, and becomes even more challenging when considering other functional domains, where the policies must be related to network and service management information in more explicit ways. Moreover, this network and service management information must be handled from different perspectives; from a business and service perspective; from a system as well as a network or domain wide perspective; to a node level, as well as a deployment level, point of view. In general, management information provides an abstraction and representation of the entities in a managed environment, their properties, attributes and operations, and the way that they relate to each other. It is a challenge to make sure that network and service state information and corresponding control and management operations, as represented in the management systems, have and maintain a coherent mapping to the state and control mechanisms of local network elements and control plane resources.

C. Service Level Agreement/Specification and Policies

A Service Level Agreement (SLA) is a contract between two or more parties with the objective of reaching a common understanding about service delivery, its quality, and the definition of responsibilities between service provider and service consumer. An important part of an SLA is the Service Level


Specification (SLS), which is the negotiated agreement that specifies the service at a technical level. An SLS has one or more service level objectives (SLOs). An SLO is a set of parameters and their values, in terms of individual metrics and operational data, used to enforce and/or monitor the SLA/SLS. Policies can be used to enforce these objectives and to report monitored compliance. These policies will include the conditions that fulfill the service properties defined in the SLA. In this way policies can be used to configure a "service" in a network or on a network element/host, invoke its functionality, and/or coordinate services in an inter-domain or end-to-end environment [17].

Furthermore, by introducing SLAs into policy-based management the objective is to increase flexibility and dynamicity as well as the level of automation. This can be achieved by facilitating the automated translation of the SLSs/SLOs into policies.
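The automated translation from SLS/SLOs to policies can be sketched as follows. The SLO representation (`"metric"`, `"max"` fields) and the rule shape are purely illustrative assumptions, not drawn from any SLS standard:

```python
def slo_to_policy(slo):
    """Translate one SLO (a metric and its upper bound) into a monitoring policy rule:
    a violation condition plus a reporting directive."""
    metric, bound = slo["metric"], slo["max"]

    def violated(measurement):
        # condition derived mechanically from the SLO parameters
        return measurement.get(metric, 0.0) > bound

    directive = f"report: {metric} exceeded {bound}"
    return violated, directive

def sls_to_policies(sls):
    """An SLS carries one or more SLOs; each is translated into a policy rule."""
    return [slo_to_policy(slo) for slo in sls["slos"]]
```

The point of the sketch is that the translation is mechanical: when the SLS changes, the corresponding policies can be regenerated and downloaded at runtime, which is the source of the flexibility and automation claimed above.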

For a GMPLS network, an SLA and the corresponding policies will play an important role in facilitating LSP provisioning, routing protocols, network restoration, and multi-layer and multi-domain environments. They enable LSPs to be established with the desired resilience, and allow flexibility for re-optimization when needed. Thus, GMPLS related attributes will be needed for the resilience level of the LSP, the transparency level of the optical LSP, and the monitoring interval levels of the LSP. As a result, SLAs and policies can facilitate the customized verification of service fulfillment and service assurance.

III. CONTROL PLANE AND PBM ARCHITECTURE

GMPLS extends MPLS to support five classes of interfaces: Layer-2 Switch Capable (L2SC), Time-Division Multiplex (TDM) capable, Lambda Switch Capable (LSC), and Fiber Switch Capable (FSC), in addition to the Packet Switch Capable (PSC) interfaces already supported by MPLS. While keeping the resource allocation mechanisms of MPLS, the control plane is thus re-architected to explicitly consider the opportunity of integrating packet and circuit switching technologies under a unified GMPLS control plane. See Figure 2 below.

Figure 2 Multilayer core network architecture with a single GMPLS control plane.

This leads to an increased number of network-wide operational objectives and to greater management complexity in multilayer networks. However, the potential exists to lower the complexity of the network's operational exploitation. Such a change enables the explicit separation of the control logic implementation from the Label Switching Router (LSR) that implements the data plane functions. Further, the GMPLS control plane software architecture is designed with the following three fundamental principles:

• Separation between protocol-specific and application-specific mechanisms

• TE link as a unique application-specific entity

• Two-stage OSPF architecture and database

A. Separating Protocol-generic Mechanisms from Application-specific Mechanisms

In the present architecture, the term application refers to IntServ, Resource Reservation Protocol Traffic Engineering extensions (RSVP-TE) for MPLS, GMPLS RSVP-TE, etc. Protocol-generic mechanisms are shared by all application level entities. The layered architecture of the control plane of an LSR is shown in Figure 3. At the bottom, the protocol-generic mechanisms comprise the protocols used for performing the network resource allocation control functionality, typically RSVP-TE and Open Shortest Path First extensions in support of GMPLS (OSPF-TE). These mechanisms make network state information available to the higher layer. Protocols are accessed through dedicated interfaces by the application level entities. These entities form a set of controllers for information processing. Resource allocation state is thus maintained periodically by refresh messages, in the soft-state model, at the protocol level. OSPF neighbor relationship maintenance, OSPF Link-state Advertisement (LSA) reliable flooding, RSVP acknowledgement, and RSVP Path/Resv state refresh are implemented once in the lower part of the software architecture, often referred to as the protocol stack. For instance, the introduction of a Link Management Protocol (LMP) module as part of the control plane software can easily be designed as an extension to the existing layered architecture and seamlessly integrated as part of the control plane functionality, rather than designing something fully distinct and much more complex to merge.

B. TE Link as Unique Application-specific Entity

In turn, the design greatly simplifies TE resource control and the related mechanisms. As any controlled entity is a designated TE link (i.e. LSP, FA-LSP (FA link), bundled or unbundled TE link), the simplification and flexibility resulting from this software architecture are such that the only TE-related entities processed at the application level are TE links. Indeed, the control plane software handles any TE entity as a TE link in the Traffic Engineering Database (TEDB), making use of a fully recursive definition of TE links. The GMPLS control plane can now be organized around the Traffic Engineering resource controller (TE controller) interacting with other controllers, e.g. the Signaling controller, with direct access to the IP control channels (IPCC). The TE controller also processes TE, topology, and reachability information from multiple Switching Capabilities (SC), without any specialization, in multilayer environments. Moreover, the control plane can still make use of advanced features (including two-step increments, fast convergence, hitless restart, and redundancy) without impacting its generality. In addition, the control plane software


also benefits from low-level tracing capabilities, which can be enabled or disabled on demand depending on the running conditions.

The only processed entities for the TE Controller and Path Computation modules, namely TE links, are defined as resource aggregates that are encoded as links with TE attributes. For instance, an unbundled component link comprising a single data link is represented as a single TE link, an unbundled component link comprising multiple data links is represented as a single TE link, and a bundled link comprising multiple component TE links is also represented as a single TE link. Further, the control plane software processes any FA link following the exact same construction rules, allowing for very flexible integration of the FA capability for multilayer network support.
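The recursive, uniform treatment of TE entities described above can be sketched as follows. This is an illustrative data-structure sketch only; the class names, the single `bandwidth` attribute standing in for the full set of TE attributes, and the `unreserved_bandwidth` aggregation are all our own simplifying assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class TELink:
    """Every TE entity (component link, bundle, FA link) is a TE link;
    a bundle holds component TE links, making the definition recursive."""
    link_id: str
    bandwidth: float = 0.0                   # for unbundled links
    components: list = field(default_factory=list)

    def unreserved_bandwidth(self):
        if self.components:                  # a bundle aggregates its components
            return sum(c.unreserved_bandwidth() for c in self.components)
        return self.bandwidth

class TEDB:
    """Traffic Engineering Database: stores every entity uniformly as a TE link."""
    def __init__(self):
        self.links = {}

    def add(self, link):
        self.links[link.link_id] = link
```

Because an FA link is inserted through exactly the same `add` path as an ordinary or bundled TE link, the path computation logic above it needs no special case for multilayer support, which is the flexibility the text claims for this construction.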

C. Two-stage OSPF Architecture and Database

In GMPLS, the OSPF-TE engine is extended with opaque LSA capabilities as well as an API for external (higher-level) applications (Figure 3). The OSPF-TE module provides functions to exchange opaque LSAs between LSRs and to process the TE LSAs, given that the GMPLS application layer can generate opaque LSAs. The routing protocol is invoked to flood the data to neighbors within the flooding scope. At the protocol level, the OSPF(-TE) database feeds a topology LSA database that contains the received raw LSA packets, kept for flooding by the OSPF-generic protocol stack. The TEDB, separate from the LSA database, thus includes only pre-processed LSAs, and therefore prevents the re-processing of every TE LSA whenever a Constrained Shortest Path First (CSPF) computation is run. The update of the TEDB from the LSA database is performed asynchronously, with flow control. The TEDB is also used to store local component TE links that comprise a set of one or more data links. However, these TE links are not advertised; only the TE link bundles are advertised. The same mechanism is foreseen for FA-LSPs, stored as FA links in the TEDB.

Figure 3 Policy-enabled GMPLS control plane architecture.

D. PBM Architecture for GMPLS Networks

For GMPLS, the classical PBM implementation is revisited and designed such that the Admission Control module is decoupled from the legacy local policy agent, now called the Policy Controller Agent. The latter then serves as a global application-level policy agent, keeping track of the policies loaded locally and applying them to the Admission Controller, the Signaling Controller, and the TE Controller. Another important revisited implementation aspect concerns the functionality implemented as part of the Admission Controller: it should be used not only for controlling incoming signaling requests (policy-based and/or resource-based admission control through RSVP-TE), but also for controlling any incoming TE routing information exchanged by means of OSPF-TE and/or LMP. This recommended architecture is depicted in Figure 3.

IV. GMPLS MANAGED ENTITIES

Above we have pointed out how, in particular in the area of GMPLS TE and service provisioning, there is a close relationship between policy information and management information. In this section we point out the main features of the NOBEL information model (IM). The NOBEL IM specifies managed entities that represent the control plane (CP) itself, its components and capabilities, how CP components are interconnected, and how they interwork. Loosely speaking, this has been identified as the CP-C model area in [18]. The model area representing the transport plane resources, topologies, and capabilities as viewed by the CP, taking into account multi-layer switching capable NEs, was termed the CP-T model. However, specific information modeling for the CP-T area has been left for further study in the next phase of NOBEL.

The central element of the model is the CP Element, which represents a node level CP instance hosted by a CP node. It contains several managed entities representing the management view of various CP functionalities and capabilities, such as the TE controller and the routing and signaling controllers. These managed entities are shown in the UML diagram of Figure 4.

Note that the model does not explicitly differentiate between managed entities representing CP components or entities for the management of the CP network itself vs. CP components or entities for the management of the transport plane. Our approach so far is that this will be reflected on the instance level only. For example, there will be separate instances of RoutingController and CPElement related to the CP network vs. instances of RoutingController and CPElement related to the control of the transport plane.

The management view of the TE, routing and signaling application level logic and processing is provided by the TE Controller (TEC), Routing Controller (RC), and the Signaling Controller (SgC), respectively. These managed entities have attributes and operations controlling the behavior of the corresponding controller processes. However, the setting of these attributes can also be achieved by policies. Signaling and routing adjacencies are also represented in the model.
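As a rough illustration of this containment, the management view could be rendered as below. The class names follow the model described above, but every attribute (scope, hello interval, retry limit) is invented for the sketch and is not part of the NOBEL IM itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical rendering of the NOBEL IM containment hierarchy; the
# attributes below are illustrative, not taken from the model itself.

@dataclass
class RoutingController:
    scope: str                   # e.g. "cp-network" or "transport-plane"
    hello_interval_s: int = 10   # behaviour attribute, settable by policy

@dataclass
class SignalingController:
    scope: str
    retry_limit: int = 3         # behaviour attribute, settable by policy

@dataclass
class TEController:
    scope: str

@dataclass
class CPElement:
    """Node-level CP instance hosted by a CP node."""
    hosted_by: str               # name of the hosting CPNode
    tec: Optional[TEController] = None
    rcs: List[RoutingController] = field(default_factory=list)
    sgcs: List[SignalingController] = field(default_factory=list)

# The model does not subclass per role; the CP-network vs transport-plane
# distinction shows up only at the instance level:
cpe = CPElement(hosted_by="node-1",
                tec=TEController(scope="transport-plane"),
                rcs=[RoutingController(scope="cp-network"),
                     RoutingController(scope="transport-plane")],
                sgcs=[SignalingController(scope="transport-plane")])
```

Note how the two RoutingController instances differ only in their scope, mirroring the instance-level distinction made in the model.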


[Figure 4 is a UML class diagram. A CPNetwork uses 1..n CPElements; each CPElement is hosted by a CPNode and may belong to a CPDomain and an RCDomain. A CPElement contains a TEController, RoutingControllers, and SignallingControllers, which issue policyRequests to a PolicyManager and use a CallAdmissionController and a ConnectionAdmissionController; the application-level controllers direct RoutingProtocolControllers, SignallingProtocolControllers, and the LMProtocolController. RoutingAdjacency, SignallingAdjacency, and LmpAdjacency classes, with aEnd/zEnd association roles, represent adjacencies between CP elements.]

Figure 4 Managed entities representing CP Element and components

The scope of the CPE follows from the scope of its contained routing controllers (RCs) and signalling controllers (SgCs). A CPE may be associated with several layer networks, even of different adjacent switching capabilities. One RC or one SgC cannot be associated with several CPEs. A role of the CPE is to control the exchange of information among RCs and among SgCs, respectively. The routing and signalling protocol controllers, on the other hand, provide a management representation of the protocol-level processes. Their general behaviour is controlled primarily by the application-level entities; however, monitoring capabilities and control of protocol-level details are included in the protocol controllers. The exact inheritance and sub-classing structure for protocol controllers is for further study.

The CP Domain and the Routing Control Domain are used for structuring the managed entities into groups according to operator preferences. This will correspond to the way the network is structured by the operator into areas or domains for various operational and administrative reasons. By such high-level entities, management operations can be specified accordingly at a high level, enabling automation, increased efficiency and scalability. Such managed entities are applicable to management systems dealing with network level management information.

V. POLICIES FOR GMPLS TE AND SERVICE PROVISIONING

The policy framework developed for the IP/DiffServ network model is built around network-wide policies for the control of QoS-enabled flow aggregation and forwarding [14]. The early work on PBM and policies for MPLS focused on the mapping of traffic flows onto LSPs (including mapping of DiffServ traffic), as well as lifecycle management and routing of LSPs, with functionalities such as signaling, resource control, and admission control (see, e.g., [21]). The mapping between IP packets (flows) and an LSP must take place at the ingress LSR by binding a Forwarding Equivalence Class (FEC) to a classifying label. A FEC is defined as a group of packets that can be treated in an equivalent manner for purposes of forwarding. FECs can be defined at different levels of granularity (source, destination, port level).
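A minimal sketch of ingress classification under these definitions follows. The field names, prefixes, and label values are all invented for illustration; a real LSR would classify in the forwarding path, not in application code:

```python
# Illustrative ingress-LSR classification: packets matching a FEC are
# bound to that FEC's label. Granularity here is destination prefix plus
# optional destination port, as described above.
import ipaddress

class FEC:
    def __init__(self, prefix, label, dst_port=None):
        self.net = ipaddress.ip_network(prefix)
        self.label = label
        self.dst_port = dst_port   # None means "any port"

    def matches(self, dst_ip, dst_port):
        if ipaddress.ip_address(dst_ip) not in self.net:
            return False
        return self.dst_port is None or self.dst_port == dst_port

def classify(fecs, dst_ip, dst_port):
    """Return the label of the first matching FEC, or None (no LSP)."""
    for fec in fecs:
        if fec.matches(dst_ip, dst_port):
            return fec.label
    return None

fecs = [FEC("10.1.0.0/16", label=100),               # destination-level FEC
        FEC("10.2.0.0/16", label=200, dst_port=443)] # port-level FEC
```

Here the two FEC instances illustrate the two granularities mentioned above: a pure destination-prefix FEC and a prefix-plus-port FEC.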

The MPLS policy information model subsequently proposed in [19][20] extends the policy information elements for IP/DiffServ by considering traffic engineering and QoS constraints and related policy actions for MPLS. These policy actions therefore refine both PCIM/e and QPIM. The MPLS policy model identifies, among other things, FECs, LSPs, and related traffic profiles. Note that the IETF work on PBM for MPLS stopped, as no working group charters were modified to include this topic.

Next, when broadening the scope to GMPLS TE and service provisioning, the main challenge is, on the one hand, the definition of policy mechanisms at LSRs to better perform (multilayer) TE and service provisioning and thus improve network efficiency. On the other hand, adaptive and cooperative mechanisms, enabling the network to better adjust to the specific TE constraints brought on by those (end-to-end) services, must be defined as well.

While policy mechanisms have emerged in response to operators' needs for MPLS, it can be argued that the lack of an overall PBM framework has hindered the operational efficiency and, ultimately, the deployment of the GMPLS protocol stack. Indeed, the lack of attention to the completion of the progressing work on (G)MPLS policies represents a barrier to future operational exploitation of control-plane-driven networks.

Continuing along this development path, the NOBEL participants believe it is important that the research community also pursues the design, evaluation, and deployment of a policy framework for GMPLS.

A. Use case: Call and connection setup via UNI

This use case describes call setup in a circuit switching capable GMPLS network by means of UNI signaling. The use case includes the setting up of supporting connectivity. Logically, two separate procedures are involved: a call setup procedure followed by the setup of one or more supporting connections. However, whether this is implemented using a combined call/connection setup message or two separate signaling messages has not been considered.

The UNI represents a service boundary between the client IP routers and the GMPLS network. The services offered over the UNI are: Call setup, call deletion, call modification, call status exchange, and service discovery.

The use case is restricted to describing the ingress side of the call setup. The procedures in the egress GMPLS node are not described. Furthermore, inter-domain issues such as signaling of call setup across the E-NNI are not part of this use case.

The use case environment consists of a set of IP-based access nodes (i.e. client nodes) supporting UNI-C functionality, and a circuit switching capable GMPLS core network. The use case is described in a protocol-neutral way, although RSVP with appropriate TE and GMPLS extensions currently seems to be the most popular choice of signaling protocol.

In the following, numbers in brackets refer to the interactions depicted in Figure 5.

Preconditions for this use case

• SLA/SLS information is provided from the Service Management System – this information has to be adapted to and installed in the Policy and service admission repository

• SLS information, including traffic profiles, is established for each client. This information is downloaded and stored in an LPDP repository, to be used when requests for (call-related) network connectivity are received from the clients. The SLS may contain information related to e.g. the number of calls (total, time-dependent, towards specific destinations), maximum capacity per call, total capacity, QoS parameters, etc.

• Call admission directives are downloaded from the centralized call admission PDP to the PEPs residing in the GMPLS network edge nodes [0a].

• Client and node specific connection admission policies are downloaded from the centralized connection admission PDP to the LPDPs residing in the GMPLS network edge nodes [0b].

Use case description

1. A client node requests a call setup by signaling the appropriate call setup message over the UNI into the GMPLS network [1].

2. The request is checked by the PEP by comparing the Client ID and port with its call admission directives. If the call request is accepted with respect to call admission policies, the part of the request concerning the setup of supporting connectivity has to be evaluated (see step 3 below). If the call admission policy rules do not permit the call setup to proceed, the call setup request is rejected by the PEP, and the client is informed [1b].

[Figure 5 shows the client edge node and the provider edge node, the latter hosting a PEP, a local PDP, a TEC with its TEDB, and an SgC within the CP. A call admission PDP and a connection admission PDP in the management plane hold call admission policy rules, general connection admission policy rules, client and node specific connection admission policy rules, client negotiation policy rules, path selection policy rules, and signaling control policy rules, fed by SLA/SLS information from the policy and service admission repository. Depicted interactions: policy download [0a], [0b]; request [1]; reject [1b]; LPDP and central PDP evaluation [2a], [2b]; negotiation [3]; path selection [4]; signaling control [5]; and the modified request [6].]

Figure 5 Combined call and connection setup via UNI


3. The call request will include parameters describing requested call characteristics, e.g. QoS and capacity. These call-level parameters translate into various network-resource-related requirements when the supporting connectivity is to be established in the GMPLS network, e.g. requirements related to recovery scheme and bandwidth. The PEP requests the LPDP to evaluate the client and node specific connection admission policy rules [2a]. These policy rules apply to a) node-specific resource usage and limitations, and b) limitations in the client's SLS / traffic profile. Furthermore, if needed, the PEP requests the centralized connection admission server/PDP, via the LPDP [2b], to evaluate the general connection admission policy rules. These policy rules apply to network-wide resource usage.

4. If the call setup request is not in accordance with the client SLS/traffic profile, the request is rejected.

5. If the request cannot be met because of GMPLS network or node limitation, although the request itself contains legal parameters, the action depends on the individual SLSs and the Client negotiation policy rules:

• The request may be rejected because the GMPLS network cannot supply the requested service [1b].

• The call setup parameters may be renegotiated so that the client may receive a service with e.g. reduced QoS or capacity, in line with what the network is able to deliver at that time [3].

6. This situation will require other TE actions as well to improve the resource situation in the GMPLS network, but this is not part of the call setup use case.

7. If the request is acceptable, the initiation of connection setup is delegated to the Traffic Engineering Controller (TEC). The TEC requests the LPDP for path selection policy rules [4]. Path selection policy rules may e.g. be related to the choice of transmission technology (e.g. if the call terminates in the operator's own network, use TDM, else use wavelength), administrative constraints such as link color, rules for diverse routing if the call is supported by more than one connection, etc. Based on constraints derived from the setup request and the path selection policy rules, and on information contained in the Traffic Engineering Database (TEDB), paths that satisfy the request are computed and selected.

8. When one or more paths have been selected for the supporting connections, the actual call setup signaling is delegated to the Signaling Controller (SgC). The SgC requests the LPDP for Signaling control policy rules [5]. Signaling control policy rules may e.g. be related to choice of signaling protocol within the GMPLS network, whether crankback shall be used, addressing of failure notifications, etc.

9. The GMPLS ingress node signals the modified call setup request [6].

Additional control plane functionality is related to the process of setting up a call (and supporting connections) at the ingress node of the network. However, as this functionality is not related to policy-based aspects, it has not been included in the use case described above.
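The decision flow of the use case above can be sketched roughly as follows. The client names, SLS contents, and capacity figures are invented for the sketch; bracketed numbers refer to the interactions of Figure 5:

```python
# Illustrative PEP/LPDP decision flow for the UNI call setup use case.
# All rule contents and thresholds are invented.

CALL_ADMISSION = {"client-A"}              # [0a] directives from call admission PDP
SLS = {"client-A": {"max_mbps": 1000}}     # [0b] client-specific connection policies

def handle_call_setup(client, requested_mbps, deliverable_mbps):
    # Step 2: PEP checks the request against its call admission directives.
    if client not in CALL_ADMISSION:
        return ("reject", None)                      # [1b]
    # Steps 3-4: LPDP evaluates client/node specific connection admission.
    if requested_mbps > SLS[client]["max_mbps"]:
        return ("reject", None)                      # outside the client's SLS
    # Step 5: legal request the network cannot currently meet -> negotiate [3].
    if requested_mbps > deliverable_mbps:
        return ("negotiate", deliverable_mbps)
    # Steps 7-9: path selection (TEC) and signaling (SgC) would follow here.
    return ("accept", requested_mbps)
```

The sketch collapses the LPDP and central PDP evaluations into simple checks; in the use case these are separate policy rule evaluations ([2a], [2b]).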

B. Use case: Event-driven TE policy action

The TE Link (utilization) threshold crossing event use case covers the case where a TE Link emits a Threshold Crossing Alert (TCA) because the TE Link resource usage has crossed a set threshold. The actions that may result from such an event are described in the use case.

An example case would be that more than a predefined percentage (e.g. 85 %) of the current FA PSC link unreserved bandwidth has been consumed, resulting in a TCA being emitted from the TE Link. Given the integrated/unified model, the perfective TE control action may correspond to the triggering of a:

• New FA PSC LSP, such that the occupancy ratio will be at least 50% and still satisfy the nested LSP traffic parameters (intrinsic) and service parameters (extrinsic) constraints.

• New FA TDM LSP, e.g. at the server layer. The FA selection process obeys rules conforming to the local unreserved bandwidth utilization, while the processing of the remaining percentage of bandwidth at the PSC layer is open to further policing. It is expected that this bandwidth will be used for satisfying small granularity LSP requests.

Preconditions for this use case

• The PEP / LPDP maintains a stack of policies, and has a local policy repository.

• The policies are downloaded by some management system.

• TE Link utilization thresholds are already set.

Use case description

1. A TE Link emits a Threshold Crossing Alert (TCA) to the TEC [1a] and the MP [1b]. [1a] is a CP internal signal, while [1b] is a CP-MP interaction (notification). See Figure 6.

2. The TEC detects the TCA and requests the PEP to invoke the TE event policy rule [2]. The TE event policy rule is a high-level rule containing a number of other rules.

3. The PEP forwards the decision request to the PDP (local, global, or both):

a. The PEP / LPDP evaluates the Load-distribution-action policy rule (this will include a query to the TEDB, via the TEC).

b. If this does not succeed, the Create LSP action policy rule, i.e. create an LSP on the server layer, is applied.

i. If enough server layer resources are available, the LPDP takes care of the further procedure (see step 4 below).

ii. If enough server layer resources are not available, the decision is delegated to the global PDP [3a], [3b]. This situation is not covered by the use case.


[Figure 6 shows a TE Link (a managed object) emitting a TCA to the TEC as a CP-internal signal [1a] and to the Management Plane as a notification [1b]. The TEC requests the PEP/LPDP to invoke the TE event policy rule [2], which groups the Load-distribution-action, Create LSP action, Path selection, LSA update action, and Information dissemination action policy rules; decisions beyond local scope are delegated to the Management Plane PDP [3a], [3b]. The PEP delegates enforcement to the TEC (with path calculation capability) [4], which triggers the SgC [5], invokes the LPDP [6], and updates the TEDB [7]; result notifications are emitted to the Management Plane [8a], [8b]. Legend: SgC = Signaling Controller, TEC = Traffic Engineering Controller, LPDP = Local Policy Decision Point, PEP = Policy Enforcement Point, TEDB = TE Database.]

Figure 6 Event-driven TE policy action

4. The LPDP evaluates the path computation/selection policy rule. Policies for path computation/selection filter the access to the TEDB.

5. The PEP delegates the enforcement of the policy decision to the TEC [4].

6. The TEC triggers the Signaling Controller for setup of the (server layer) LSP [5].

7. If this is a success, then the TEC invokes the LPDP [6] to:

a. Check the LSA update policy (which parameters should be assigned to resources, i.e. how to handle the resource from a routing point of view, e.g. set color of TE Link, TE bundling aspects, FA-LSP).

b. Evaluate the routing information dissemination policy; the TEC will then possibly initiate an LSA update (not shown in the figure).

8. The TEC updates the TEDB [7]. The TEC Managed Entity emits a notification to inform about the result of the policy decision enforcement [8a].

9. The corresponding Managed Entity (TE Link) emits a state change notification to inform the Management System (in practice via MIB update) [8b].
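The nested rule evaluation in steps 2-3 above can be sketched as follows. The rule names follow the use case, but the TEDB representation, thresholds, and success criteria are invented:

```python
# Illustrative nesting of the TE event policy rule: on a TCA, try load
# distribution first, fall back to server-layer LSP creation, and finally
# delegate to the global PDP. All decision logic is invented.

def load_distribution(tedb):
    # Succeeds only if an alternative link has spare capacity (invented rule).
    return any(link["free_mbps"] > 100 for link in tedb["alt_links"])

def create_server_lsp(tedb):
    # Succeeds only if the server layer has enough free capacity (invented rule).
    return tedb["server_layer_free_mbps"] >= 500

def te_event_policy(tedb):
    """High-level rule invoked by the PEP on a TCA [2]."""
    if load_distribution(tedb):            # 3a: Load-distribution-action rule
        return "load-distributed"
    if create_server_lsp(tedb):            # 3b.i: Create LSP action -> steps 4-8
        return "server-lsp-created"
    return "delegated-to-global-pdp"       # 3b.ii: escalate [3a], [3b]
```

The ordering encodes the fallback structure of the use case: a local, cheap action is preferred, and only unresolvable situations are escalated.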

C. Summary and overview: Proposed Policy Actions with example conditions and supporting information

The two use cases provide insight as to what policy rules and features are involved in GMPLS TE and service provisioning. In this section we propose further identification and structuring of what we consider to be the needed policy actions. Example conditions, roles, and supporting information will also be provided according to the proposed structure. We do not intend to do explicit policy information modeling at this stage; specific policy information modeling is left for further study. However, we provide some directions, guidelines, and principles for policy information modeling in the following section.

The approach we have taken, besides analysis through use cases, has been to identify, analyze, and structure the policy actions needed for GMPLS TE and service provisioning. Furthermore, we let the identified policy actions drive the identification and analysis of associated policy conditions, roles, and the information needed to complete policy conditions or actions. By such a structured analysis we prepare for, and make easier to tackle, the next task, i.e. policy information modeling. This is work in progress, and some of the areas below do not yet have explicit policy actions identified as such.

1) Admission control policies

As stated in Section II, the QPIM model already provides policy actions related to resource admission of packet switched traffic, that is, with focus on IP traffic. What is then needed in addition when considering GMPLS? On the one hand, the ASON inspired separation of call and connection must be taken into consideration. On the other hand, admission control is also concerned with TE routing information advertised and exchanged by means of OSPF-TE and/or LMP (see also routing control policies).

For call admission control we propose the following:

• Call Admission Action

The positive condition includes correct authentication.

This action may involve setting authorization data. The condition may involve other administrative data, typically SLA related, and/or time related, as well as client application type.


• Call Reject Action

Such an action may be needed to distinguish among several reject procedures, including what to report back to the client, and whether to start renegotiation.

For connection admission control we propose the following:

• Connection (Initiate) Admission Action

It is likely that subclasses are needed to distinguish between the various LSP switching capability classes, as well as the overlay vs. peer interconnection model. Considering the overlay model, the condition for a successful connection admission is successful path selection by the ingress node (see below). In addition, several client- or node-specific conditions may apply, e.g. total number of connections or bandwidth constraints, possibly dependent on time, destination, or service class. See also the conditions mentioned in the call/connection use case above.

• Connection Reject Action

Such an action may be needed to distinguish among several reject procedures, including what to report back to the client, and whether to start renegotiation.

2) Signaling control policies

Signaling control policies relate to operations involving parameters of GMPLS RSVP signaling and its TE capabilities. The Signaling Controller should behave in accordance with the TEC, and care should be taken not to set signaling control policies that go beyond the scope of the Signaling Controller. Potential actions in this area are:

• Crank-back Action

• Signaling Recovery Action

3) TE (routing) control policies

The dissemination of network state information for routing and TE must be carefully controlled in order to provide sufficient, but not an overwhelming volume of, information to adjacent routing nodes. Typically, this includes or relates to topological information as well as TE information quality. The latter includes advertisement of per-priority bandwidth information (unreserved, maximum reservable, and maximum LSP bandwidth), as well as metric, resource class, and SRLG information. Policies provide the flexibility to adapt routing and TE information dissemination to changing conditions. The policy rules must also take into account the different roles of the various routing adjacencies.

• Configure TE Link Action

It allows configuring bandwidth-related parameters, metric, resource class, and SRLG information advertisement.

• Bundle TE Link Action

Controls the creation and aggregation, i.e. bundling, of TE Links, where local information on component links is kept link-local. It includes the selection of component links during the initial provisioning phase, and subsequent addition and/or deletion of component links depending on network resource usage.

• Link State Advertisement Action

In the event of a “significant” TE Link state change, conditions should be checked to decide whether to advertise the new TE Link state. The condition could depend on estimations from historical data, time since last advertisement, etc.

• Manage TE Info Action

Before advertising TE Link state, the information may need to be filtered, aggregated, or otherwise processed. Policy actions for each of these more detailed actions might be needed.

• Create FA Action

In the event that an LSP has been created for TE purposes, a decision must be made whether to make it an FA-LSP, and which parameters should apply to the FA-LSP.
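As an illustration of the Link State Advertisement Action condition described above, the following sketch combines a significance test with a hold-down on the time since the last advertisement. The 10% threshold and 30-second hold-down are invented values; a real policy would make them configurable:

```python
# Illustrative Link State Advertisement Action condition: advertise only
# if the bandwidth change is "significant" AND a hold-down timer since the
# last advertisement has expired. Both thresholds are invented.

SIGNIFICANT_CHANGE = 0.10   # fraction of the last advertised unreserved bw
HOLDDOWN_S = 30             # minimum seconds between advertisements

def should_advertise(last_adv_bw, current_bw, seconds_since_last_adv):
    if seconds_since_last_adv < HOLDDOWN_S:
        return False                       # rate-limit advertisements
    if last_adv_bw == 0:
        return current_bw > 0              # anything is significant vs zero
    change = abs(current_bw - last_adv_bw) / last_adv_bw
    return change >= SIGNIFICANT_CHANGE
```

As the text notes, richer conditions (estimates from historical data, per-adjacency roles) could replace or extend the two checks shown here.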

4) Path computation and selection policies

For the creation of LSPs, various path computation and selection policies should be allowed. The exact choice of rules, and how rules are combined and related to each other, will be a choice for the network operator. Policies are needed to exclude or include resource information located in the TEDB for the path calculation or selection process. We distinguish between policies advising which resource types to choose vs. policies advising which resource instances to choose. The resources (types or instances) can be links, nodes (including loose hops), or potentially the whole path. The conditions will consider both service classes (including parameters such as delay, jitter, loss, availability, and resilience) and service instances, where specific sources and destinations, and bandwidth/throughput requirements, can also be taken into account. Availability and resilience of the LSP can be considered; thus, the policies will also include decisions regarding backup paths or resources. How the conditions will depend on TE Link states will be further considered. Suggested policy actions are:

• Link Type Selection Action

• Path Computation Action

The above Path computation action may also contain path selection directives.
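A rough sketch of policy-filtered path computation follows. The link attributes, costs, topology, and the policy predicate are all invented; the point is only that the policy decides which resource types enter the computation before an ordinary shortest-path search picks instances:

```python
# Illustrative path computation with a policy pre-filter on the TEDB:
# the `allow` predicate encodes a resource-type policy; Dijkstra then
# selects resource instances among the links that pass it.
import heapq

def shortest_path(links, src, dst, allow):
    """Dijkstra over the links passing the policy predicate `allow`."""
    graph = {}
    for (a, b, cost, attrs) in links:
        if allow(attrs):
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, c in graph.get(node, []):
            if d + c < dist.get(nxt, float("inf")):
                dist[nxt] = d + c
                heapq.heappush(heap, (d + c, nxt, path + [nxt]))
    return None   # no policy-compliant path exists

links = [("A", "B", 1, {"tech": "tdm"}),
         ("B", "C", 1, {"tech": "tdm"}),
         ("A", "C", 1, {"tech": "lambda"})]

# Invented policy: the call terminates in the operator's own network,
# so prefer TDM and exclude wavelength links from the computation.
tdm_only = shortest_path(links, "A", "C", allow=lambda a: a["tech"] == "tdm")
```

With the TDM-only policy the direct lambda link is excluded, so the longer two-hop TDM path is selected instead.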

5) Load distribution policies

This is an area where it is important to harmonize between IP/MPLS and GMPLS needs. Support is needed to allow load distribution policies to be triggered by link utilization events. A load distribution action could be associated with a set of more detailed actions or choices. For instance, the action could be to modify path computation or selection policies, or to initiate LSP setup.

• Load Distribution Action

6) Traffic mapping policies (MPLS)

As mentioned above, the early work on policies for MPLS considered traffic mapping policies. This includes mapping traffic to Forwarding Equivalence Classes (FECs), and mapping FECs, via traffic trunks, to LSPs. However, policies for mapping LSPs to server LSPs or to the physical topology should not be limited to MPLS; rather, the scope of GMPLS should be considered. This policy area must be harmonized with the areas of path computation and selection policies, as well as load distribution policies, which deal with mapping client resources onto server resources. Mapping policies relating to resource types, as well as policies for flow or LSP instances, are needed.

7) Recovery related policies

The path computation and selection policies are expected to take backup paths into account. The configuration of such features could be policy based. In addition, there are recovery actions that could be controlled by policies, or rather by downloaded directives based on policies, in order to have sufficient responsiveness. For instance, priorities can be set to direct how to distribute the backup resources among a set of LSPs to be recovered, or to otherwise direct the recovery process. Again, type vs. instance must be considered.

• LSP Recover Action

VI. POLICY INFORMATION MODELLING

After identifying requirements for, and preferred types of, (informal) policies and related information, the next step is to develop formal policy information elements to support policy-based GMPLS TE and service provisioning. The preferred starting point for policy information modeling is the PCIM/e [12][13] and QPIM [14] policy information models (see Section II). The key question is then which information elements can be used directly, at deployment time, from PCIM/e and QPIM, and which new information elements are needed. The new elements can be introduced either as an update of PCIM/e or QPIM, or in a new standard policy information model dedicated to GMPLS TE and service provisioning. In the following, we identify some general guidelines for policy information modeling. In addition, well-known object-oriented information modeling rules and guidelines also apply. The proposals below are preliminary and will be further investigated, elaborated, and updated.

The general advice from [12] is: “Policy models for application-specific areas may extend the “core policy model” in several ways. The preferred way is to use the PolicyGroup, PolicyRule, and PolicyTimePeriodCondition classes directly, as a foundation for representing and communicating policy information. Then, specific subclasses derived from PolicyCondition and PolicyAction can capture application-specific definitions of conditions and actions of policies.” Furthermore, in [13] it is stated when nesting of policy rules is considered: “Policy rules have a context in which they are executed. The rule engine evaluates and applies the policy rules in the context of the managed resource(s) that are identified by the policy roles (or by an explicit association). Submodels MAY add additional context to policy rules based on rule structure; any such additional context is defined by the semantics of the action classes of the submodel.”

There are several interesting observations that can be made with respect to QPIM. Several QoS-related policy actions (classes) are defined, of which only “RSVPSimpleAction” is derived from “SimplePolicyAction”. Traffic profiles, which can be considered policy (support) information, are derived directly from “Policy”. An association is defined that ties a traffic profile to an “AdmissionAction”. Several subclasses of the implicitly bound “QoSPolicyRSVPVariable” are defined; it is interesting to note that no explicit variables are defined. These variables are used in various conditions. However, no QPIM-specific condition classes are defined, as the PCIM/e policy condition classes can be used directly. Associations are defined to allow (simple) policy action “parts” to be associated with an overall QoS policy action.

Note that QoS policy conditions use the implicit variables defined in PCIM/e and QPIM. This is the natural modeling choice, since a (centralized) QoS PDP does not need to take into account the state of managed resources as represented by explicit managed objects; the conditions model the state of network devices that are not modeled directly. However, considering GMPLS TE and service provisioning, things become different. In Section IV we presented the main characteristics of a management information model representing both CP and TP resources as perceived by the CP and presented to the MP. Both node-level and network-level managed resources will exist, and they can reside centralized as well as distributed with the CP nodes. Policy conditions will often depend on the state of managed entities, and should be represented by explicit variables. One design option would then be the refinement of the PCIM/e “PolicyExplicitVariable” into an extensive set of specific explicit variables according to what is proposed in Section V. Whether centralized or distributed (local), a GMPLS PDP will use both policy information and management information. Thus, policy and management information must be closely integrated, and information model consistency must be ensured. Such a holistic modeling perspective was not sufficiently considered in the early policy modeling attempts (see e.g. [20]). Implementation-specific conditions may require that such an overall information model be split into “fragments”. It should be easier to achieve a good result if the starting point is correctly modeled.

In Section V several policy actions are proposed. The next step is to decide which of these need to be modeled as separate policy action classes, derived from the general PCIM/e policy action class, and which should rather use a generic PCIM/e policy action class directly.

Explicit policy events are not needed in QPIM, as the events are implicit (either time-dependent or implicit traffic handling events). PCIM/e, which was heavily influenced by the QPIM work, does not include policy events either. However, GMPLS policies may often depend on events that should be explicitly modeled. Often these events naturally belong to managed entities and should be modeled by managed entities. Thus, there is a need to tie managed entity events (or notifications) to policy rules. At the general level this should be modeled as part of PCIM/e by introducing a “PolicyEventInPolicyRule” policy component association. Domain-specific policy information models can then refine this further.
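Very informally, the proposed “PolicyEventInPolicyRule” association could look like the sketch below. Apart from the event and rule names already mentioned, every class detail here is invented for illustration and is not part of PCIM/e:

```python
# Informal sketch of the proposed extension: a PolicyEvent, emitted by a
# managed entity, is tied to a PolicyRule through a "PolicyEventInPolicyRule"
# association, so the event triggers evaluation of the rule. All internals
# beyond the association's name are illustrative.

class PolicyEvent:
    def __init__(self, name):
        self.name = name            # e.g. a TE Link TCA

class PolicyRule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # callable over explicit state variables
        self.action = action
        self.triggering_events = [] # the PolicyEventInPolicyRule association

    def on_event(self, event, state):
        """Evaluate the rule only if `event` is associated with it."""
        if any(e.name == event.name for e in self.triggering_events):
            if self.condition(state):
                return self.action(state)
        return None

tca = PolicyEvent("TELinkTCA")
rule = PolicyRule("te-event-rule",
                  condition=lambda s: s["utilization"] > 0.85,
                  action=lambda s: "trigger-load-distribution")
rule.triggering_events.append(tca)  # bind event to rule
```

The point of the sketch is the association itself: an unassociated event, or an associated event whose condition fails, does not fire the action.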

In the NOBEL management information model, introduced in Section IV, we have defined several managed object classes, and several of those have actions (operations) defined. Policy rules for GMPLS TE and service provisioning rely upon the state of managed resources, which to a large extent are modeled as managed entities. Given this, it appears reasonable to map “Managed Entity Actions” to policy rules. The exact modeling construct to support such an approach is for further study. This may also imply proposals to extend PCIM/e.

In QPIM, traffic profiles are derived directly from “Policy”. In the area of GMPLS we foresee that this kind of policy support information will also be used and modeled as part of management information models. One example is service level specification information, which is needed in various forms for various purposes. Again we see the need for a close relationship between policy and other management information. To accommodate such dependencies, we will consider developing and adding association classes relating such support information to policy information.

VII. CONCLUDING REMARKS AND FURTHER WORK

In this paper we have presented our “first iteration” of identification and analysis of policies for GMPLS TE and service provisioning. The work was based on analysis of existing material such as the IETF policy framework [4], the general policy information models (PCIM/e) [12][13], and the QoS Policy Information Model (QPIM) [14]. Early work on PBM requirements and policy information for MPLS [19][20][21] was also studied; however, we observed that the IETF did not continue these efforts.

Two use cases, call and connection setup via the UNI and event-driven TE policy action, were analyzed to explore how policies are involved in service provisioning and TE operations. Based on previously identified requirements, we (informally) proposed several policy actions applicable to GMPLS TE and service provisioning. Our approach focuses on the identification of policy actions, letting a well-structured set of policy actions drive the further identification of related policy and management information. This will guide the formal policy information modeling, which is the next step of the ongoing work.

We have pointed out guidelines for policy information modeling in support of GMPLS TE and service provisioning. We propose to extend PCIM/e with Policy Event, to explicitly model the triggering of policy rules; this will increase automation and operational efficiency. We also recommend that the relationships between policy information, management information, and related support information be explicitly captured and represented in a coherent modeling approach.

We recommend that the topic of GMPLS TE and service provisioning be put on the IETF agenda. In the EU NOBEL 2 project we will pursue the further analysis of this topic and correspondingly develop policy information elements to enable and support PBM for GMPLS networking in the areas of service provisioning, TE and path computation.

ACKNOWLEDGMENT

This work was carried out in collaboration with the IST project NOBEL, funded by the European Commission.

REFERENCES

[1] E. Mannie (Editor), Generalized Multi-Protocol Label Switching (GMPLS) Architecture, IETF RFC 3945, 2004.

[2] L. Berger (Editor), Generalized Multi-Protocol Label Switching (GMPLS) Signaling, Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions, IETF RFC 3473, 2003.

[3] K. Kompella (Editor), Y. Rekhter (Editor), OSPF Extensions in Support of Generalized Multi-Protocol Label Switching, IETF RFC 4203, October 2005.

[4] R. Yavatkar, D. Pendarakis, R. Guerin. “A Framework for Policy-based Admission Control”. IETF RFC 2753. January 2000.

[5] S. Herzog. “RSVP Extensions for Policy Control”. IETF RFC 2750. January 2000.

[6] D. Durham, et al., “The COPS (Common Open Policy Service) Protocol”, IETF RFC 2748, January 2000.

[7] M. Vigoureux, B. Berde, L. Andersson, T. Cinkler, L. Levrau, D. Colle, J. Fdez-Palacios, M. Jaeger, “Multilayer Traffic Engineering for GMPLS-enabled Networks”, IEEE Communications Magazine, July 2005.

[8] G. Lehr, U. Hartmer, R. Geerdsen, “Design of a Network Level Management Information Model for Automatically Switched Transport Networks”, NOMS 2002.

[9] J. Strassner, Policy-Based Network Management: Solutions for the Next Generation, Morgan Kaufmann, 2004.

[10] B. Berde, H. Abdelkrim, M. Vigoureux, R. Douville, D. Papadimitriou, Improving Network Performance through Policy-based Management applied to Generalized Multi-Protocol Label Switching, 10th IEEE ISCC 2005 Symposium, June 2005.

[11] M. Brunner, J. Quittek, MPLS Management using Policies, IFIP DSOM’01, 2001.

[12] B. Moore, “Policy Core Information Model, Version 1 Specification”, IETF RFC 3060, 2001.

[13] B. Moore, Policy Core Information Model (PCIM) Extensions, IETF RFC 3460, 2003.

[14] Y. Snir, Y. Ramberg, J. Strassner, R. Cohen, and B. Moore. “Policy Quality of Service (QoS) Information Model,” RFC 3644, IETF, Nov. 2003.

[15] H. Hazewinkel, Ed. D. Partain, Ed. “The Differentiated Services Configuration MIB”. RFC 3747, IETF, April 2004.

[16] K. Chan, R. Sahita, S. Hahn, K. McCloghrie. “Differentiated Services Quality of Service Policy Information Base”. IETF RFC 3317, March 2003.

[17] A. Westerinen, et al. “Terminology for Policy-Based Management”. RFC 3198, IETF, November 2001.

[18] M. Jaeger, et al. “Conclusions on Network Management and Control solutions supporting broadband services for all”. Deliverable D33, EU IST project NOBEL.

[19] K. Isoyama, M. Yoshida, M. Brunner, A. Kind, J. Quittek, “Policy Framework QoS Information Model for MPLS”, IETF draft, draft-isoyama-policy-mpls-info-model-00.txt, December 2000 (expired).

[20] R. Chadha, Huai-An (Paul) Lin, “Policy Information Model for MPLS Traffic Engineering”, IETF draft, draft-chadha-policy-mpls-te-00.txt, July 2000 (expired).

[21] S. Wright, F. Reichmeyer, R. Jaeger, M. Gibson. “Policy-Based Load-Balancing in Traffic-Engineered MPLS Networks”. IETF draft, draft-wright-mpls-te-policy-00.txt, June 2000 (expired).