SAP NetWeaver Process Integration Playbook

Version 1.0

Authors: Amar Nemalikanti, Keerti Nayak, Sreedhar Kanchanapalli, Yashu Vyas

212458274 SAP Integration Playbook V1




Contents

Planning and assessment 1

Introduction 1

Integration history 2

Integration challenges 3

What is an Enterprise Service Bus (ESB)? 4

ESB key features/characteristics 4

The case for an ESB 6

Selecting an ESB 7

Understanding ESB functionality is an essential step in selection 7

Positioning SAP NetWeaver Process Integration (PI) as the ESB 8

How can SAP NetWeaver PI as ESB help address the above integration challenge? 9

Integration strategy 15

Integration architecture strategy 15

Communications middleware 18

Application and technical connectivity 22

Application connectivity options 22

Integration technologies options 23

Effort estimation and staffing plan 24

Project estimation tool 24

Staffing/resource planning model 24

Staffing plan 24

Envisioning 26

Interface/scenario-related decision factors 26

New PI 7.3 feature and function usage 26

Number of connected business systems 27

Number of live interfaces 27

Type of high-volume interfaces 28

Long-running ccBPM processes 28

Implementation 29

Design — Best practices (Interface Strategy, Error Handling Strategy…, etc.) 29

Purpose 29

EAI Playbook


Audience 29

Assumptions and risks 29

Scope 29

Executive summary 29

Integration platform 29

Integration patterns and key considerations 30

Integration strategy for SAP systems 30

Design — Error handling strategy 34

Purpose 34

Scope 34

Executive summary 34

Error notification 34

Message reprocessing 34

Definitions 35

Error classification 35

POF trigger 36

POF connectivity 36

POF transformation 37

POF process 43

Design — Archiving strategy 49

Purpose 49

Scope 49

Audience 49

PI engine archiving 49

Archiving 58

Different archiving/delete-related reports at a glance 60

Design — PI application transport strategy 61

Purpose 61

Scope 61

Audience 61

Transport method used: CTS+ 61

Transport naming convention 64

Design — Middleware/system sizing process 64

Development — Development standards and naming convention 64

Purpose 64

Scope 65

Audience 65

Guiding principles and best practices 65


SLD 65

ESR 68

ID 73

Summary 76

Development — Templates 81

Development — Checklist 81

Code review checklist 81

Run procedure 84

Development checklist 85

Development readiness checks 87

Development status and templates 87

Production cutover 87

Plan 87

Strategy 88

Continuous Evolution 98

Interface monitoring approach and strategy 98

Introduction 98

Available from 98

Time... is now!! 98

Customers’ view 99

Defect and change management 101

Standard CR process for development/enhancements 102

Handling bug fixing, issues, and user changes 103

Urgent changes 104

Outage processes 105

Planned outage processes 105

Unplanned outage processes 106

Best practices to avoid unplanned downtime 107

Release management 109

Purpose 109

Introductory notes 109

Integration 109

Features 110

EAI SW upgrade/update processes 110

Decision factors 110

Usage type 112

PAM compliance 112


Downtime requirements 113

IT/operations-related decision factors 114

High-availability setup 114

Auditing requirement 114

Basic and operations customizing 115

Third-party adapters and content 115


Planning and assessment

Introduction

Most enterprises run and maintain hundreds of applications supporting their business needs. These applications can be packaged products or custom-built solutions. They may have been built in house or acquired from third parties; they may be part of the original legacy systems, may have arrived through mergers and acquisitions, or may be a combination of these.

It is sometimes surprising how organizations let their information technology (IT) landscapes fall into such a state of disarray. Closer analysis, however, shows that there are reasons why such varied systems grew and prospered.

Most solutions in use today were adopted because they were the best-of-breed solutions available at the time for a business function that genuinely needed them. When the decision was made to buy or build these now-legacy solutions (be it in COBOL or other ancient languages), they were still considered ‘advanced.’

Also, some solutions specialize in a particular area, such as billing, human resources (HR), supply chain, manufacturing, retail, distribution, or healthcare. More often than not, they delivered their functionality at minimal total cost of ownership and required very little customization.

Many heavyweight applications of today, like SAP and Oracle, do provide much of the functionality required to run a modern business. However, they cater to only a fraction of a client’s requirements. Even in today’s complex business environment, many enterprises still use applications that fall outside the scope of these heavyweights.

With the world coming closer together, users no longer want to limit their business to small silos within the organization. Enterprises have started realizing that the potential of an organization can only be optimally realized when all the divisions within it are accessible without boundaries. Limiting such interaction due to technology-related constraints is unacceptable to the business. Technology is there to grow the business, not to limit it. As an example of a business process that spans multiple organizations, an assembly unit may want to check inventory, compare the inventory to forecasted demand, request quotations, order parts, receive shipments, verify quality, and fulfill an invoice. These functions may be carried out by more than one system and may involve interactions between them. To support such business processes across a heterogeneous landscape, the applications need to be integrated. Application integration must provide efficient, reliable, and secure data exchange between multiple enterprise applications.


Integration history

A vanilla integration between two applications is very easy to conceptualize: a hardwired program in one of the applications that converts data from point A to point B using a custom, proprietary application format will do the work. The only challenge is that someone needs to know both application A and application B. Finding such people is difficult, yet not impossible.

However, this challenge multiplies as more and more applications enter the integration landscape. Gradually, organizations end up with what is called the spiderweb topology. This is difficult to maintain and extremely expensive too.

An alternative and highly effective way to meet these integration challenges while avoiding the spiderweb is to use an Enterprise Application Integration (EAI) tool, such as SAP Process Integration (PI), TIBCO, webMethods, SeeBeyond, or Cordys. These EAI tools can communicate with any system and with each other. What was earlier a spiderweb is now a hub and spoke: all the applications talk to each other via the EAI tool, which performs the translation necessary between any two applications in the network. This simplifies the landscape and enables robust integration between heterogeneous applications running on different platforms, without replacing any existing system.
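The savings from hub and spoke are easy to quantify: with n applications, point-to-point integration needs a conversion program for every pair of applications, whereas hub and spoke needs only one adapter per application. A small illustrative sketch (the function names are invented for illustration, not taken from the playbook):

```python
def point_to_point_links(n: int) -> int:
    # Every pair of applications needs its own conversion program:
    # n * (n - 1) / 2 links in total.
    return n * (n - 1) // 2


def hub_and_spoke_links(n: int) -> int:
    # Each application needs only a single adapter to the EAI hub.
    return n


# 10 applications: 45 point-to-point conversion programs vs. 10 hub adapters.
print(point_to_point_links(10), hub_and_spoke_links(10))
```

The gap widens quadratically: at 50 applications the spiderweb needs 1,225 conversion programs, the hub only 50 adapters.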


[Figure: point-to-point integration, with Application A connected to Application B through Conversion Program A-B]


Integration challenges

However simplified, enterprise integration still has its own set of challenges.

1. Many businesses use EAI solutions for their day-to-day operations. Once these processes are set up, they are expected to be up and running 24×7×365. Any downtime or misrouting can bring the business to a virtual standstill.

2. The skills required to maintain an EAI solution are quite daunting. Troubleshooting complex problems requires a combination of skill sets that are generally spread across individuals. Hence, when staffing a development or maintenance project, individuals need to be shortlisted based on various parameters, and strong collaboration must be expected.

3. Lack of standards is another major challenge. Though many standards have evolved and been adopted across multiple EAI tools, not all EAI tools agree on or work with the same standards. Some tools claim to adopt the standards but are not 100% compliant. In addition, extensions to the standards allow customization to creep into them.

4. Most EAI tools use the eXtensible Markup Language (XML) for exchanging information. XML is a free-form language in which tags can be made up as and when needed; it is, in effect, a standard with no standard. Hence, when we talk about interoperability, one still needs to resolve the semantic differences between these XML vocabularies. This is a time-consuming task that requires significant technical and business decisions.

5. EAI requires a shift from the tradition of working in silos. Once a department has agreed to be part of a process flow spanning multiple departments, the business function that belongs to one department becomes a shared responsibility. This removes the department’s exclusive ownership of the functionality. Any upgrades or changes to the functionality now need to be made in consultation with the other departments.
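The semantic-mismatch problem in point 4 above can be made concrete with a hypothetical sketch: two partners describe the same purchase order in structurally different XML, and a mapping layer must reconcile both into one canonical form. The element and attribute names here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Two hypothetical partner formats for the very same purchase order.
DOC_A = "<order><id>42</id><qty>7</qty></order>"
DOC_B = "<PurchaseOrder OrderNumber='42'><Quantity value='7'/></PurchaseOrder>"


def canonical(xml_text: str) -> dict:
    """Map either partner format to one canonical representation."""
    root = ET.fromstring(xml_text)
    if root.tag == "order":                      # partner A's convention
        return {"order_id": root.findtext("id"),
                "quantity": int(root.findtext("qty"))}
    if root.tag == "PurchaseOrder":              # partner B's convention
        return {"order_id": root.get("OrderNumber"),
                "quantity": int(root.find("Quantity").get("value"))}
    raise ValueError(f"unknown document type: {root.tag}")


# Both well-formed XML documents mean the same thing, but only a mapping
# written with business knowledge can establish that.
assert canonical(DOC_A) == canonical(DOC_B)
```

The XML parser resolves none of this; the equivalence of the two documents is a business decision encoded by hand, which is exactly the effort point 4 warns about.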


[Figure: hub-and-spoke integration, with Applications A, B, C, D, and E all connected through the central EAI hub]


What is an Enterprise Service Bus (ESB)?

An ESB is a communication and mediation layer that connects service consumers and service providers in service-oriented architecture (SOA) scenarios and in situations that mix SOA and other architecture styles. In other words, an ESB is an intermediary layer of middleware through which a set of reusable business services are made widely available. It typically is designed to be the backbone of a modern SOA, routing requests between service consumers and service providers, both synchronously and asynchronously, and may be configured to perform a variety of actions, such as routing, translation, protocol conversion, or authentication in between.

Essentially, it accepts service consumer requests, forwards them to the appropriate service providers, and returns whatever response may be generated to the requestor. To do this in an enterprise environment, an ESB includes availability and scalability features, as well as the ability to locate all of the connected service providers. An ESB also handles multiple protocols, with Web services being the most discussed. An ESB can support both synchronous and asynchronous requests, typically including or using store-and-forward messaging as part of the solution.

The real power of an ESB, though, is what it can do in between the consumer and provider, including data-dependent routing, data translation, protocol conversion, security features, and support for many protocols and interfaces.

ESB key features/characteristics

The core functions, which provide the basic operational capabilities of the ESB, are as follows:

Support of multiple protocols. An ESB typically supports a wide range of Web services, Representational state transfer (REST) and other protocols, both to provide a range of capabilities for newly developed business needs and to support integration with a wide range of third-party legacy systems and services.

Protocol conversion. Just as important as an ESB’s support of multiple protocols is its ability to accept a request in one protocol and forward it as a request using a different protocol, a capability that simplifies using the ESB with both new and legacy systems.

Data transformation and data-based routing. ESBs typically have the ability to translate data from one format to another, possibly using that data to enrich data streams and make routing decisions along the way.
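Data-based (content-based) routing of the kind described above can be sketched in a few lines; this is an illustrative toy, not any vendor’s actual implementation, and the queue names and threshold are invented:

```python
def enrich(message: dict) -> dict:
    # Enrichment step: derive a priority field from the payload itself.
    message["priority"] = "high" if message.get("amount", 0) > 10_000 else "normal"
    return message


def route(message: dict) -> str:
    # The routing decision depends on message content, not on a
    # destination hardwired by the sender.
    message = enrich(message)
    if message["priority"] == "high":
        return "queue.finance.review"
    return "queue.orders.standard"


assert route({"amount": 25_000}) == "queue.finance.review"
assert route({"amount": 500}) == "queue.orders.standard"
```

The sender remains unaware of the topology: it posts one message, and the bus decides the endpoint after inspecting (and enriching) the data.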

Support of multiple connectivity options. ESBs provide the means to connect to the databases, messaging systems, management tools, and other infrastructure components that are part of an organization’s existing infrastructure.

Support of composite services through lightweight orchestration. Lightweight orchestration, commonly called “flows” or “itineraries” by ESB vendors, is generally stateless and short-lived, though neither is a requirement. It means connecting multiple services into a larger composite service, with the ESB managing the flow of control and information among the component services. The term “lightweight” stands in contrast to Business Process Execution Language- (BPEL-) based process orchestration.

Support of multiple standard business file formats. Many vertical industries have defined file formats, the most recent being XML based. ESBs typically provide the ability to work directly with these formats.

Integrated security features. ESBs provide integration with security directories and Operating System (OS) security features to support authentication and authorization, simplifying the challenge of making services available to multiple user communities (for example, employees, customers, and agents).


A comprehensive error-handling mechanism. ESBs provide uniform mechanisms for identifying, managing, and monitoring both technical and business errors, with the ability to customize specific error behavior as needed.

Support of both synchronous and asynchronous operations. ESBs must support requests and operations of both the synchronous and asynchronous variety, making it easy to use each where appropriate, since all modern organizations will have business activities that fall into both categories.

Highly available and scalable infrastructure. ESBs can use software and/or hardware clustering and other mechanisms to provide high availability. Every ESB has the ability to support horizontal scalability and to span a large infrastructure; some also provide facilities to support vertical scalability of individual services.

Extensibility. While it is important for ESBs to support a broad range of individual capabilities in each area, it is perhaps even more important for an ESB to make it possible for customers to add capabilities themselves. Your ESB does not support a particular Web services protocol? Add it. You need to talk to an aging legacy system using a homegrown messaging system? Add support for that too. The good news is that all of the ESBs provide some means to make these kinds of extensions, which then behave indistinguishably from the out-of-the-box options.

Graphical editing tools. Graphical editors for ESB flows (“itineraries” or “lightweight orchestrations”) make it much easier for architects and developers to work with ESB activities over time, especially in the case of maintenance activities that occur long after flows’ original development. Without graphical editing tools, developers must work directly with the ESB’s proprietary XML-based programming language, a task that requires both tedious hand coding of XML and in-depth knowledge of the ESB’s operation.

Service level agreement (SLA) monitoring and management. Most ESBs have some provision for controlling throttling and load balancing to meet defined SLAs on a per-endpoint basis. ESBs cannot yet replace SOA service management products that provide endpoint security and management, though the trend is toward ESBs providing more and more of these capabilities.
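Per-endpoint throttling of the kind mentioned above is often implemented with a token-bucket scheme; the following is a minimal sketch under that assumption, not the mechanism of any specific ESB product:

```python
import time


class TokenBucket:
    """Allow at most `rate` requests per second to one endpoint."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False        # request exceeds the endpoint's SLA budget


bucket = TokenBucket(rate=2, capacity=2)
# The first two back-to-back calls fit in the bucket; the third is throttled.
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```

A real ESB would keep one such bucket per endpoint and either queue or reject the throttled requests, but the accounting idea is the same.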

BPEL and other business process support. Design, simulation, and execution of business processes using BPEL and its cousins remain primarily the domain of business process management (BPM) suites, but ESBs are supporting a growing range of these capabilities. Most products have the ability to create, execute, and manage BPEL orchestrations; some also offer process-simulation capabilities.

Business activity monitoring (BAM). BAM allows customers to define business-centric metrics called key performance indicators (KPIs) and to present those KPIs in near real time using dashboards. BAM also generates alerts to notify businesspeople of potential operational problems when these KPIs cross specified thresholds. Some ESBs provide comprehensive BAM capabilities; others rely on third-party BAM products.

Service life cycle management. Most ESBs include at least some life cycle management features. ESB vendors that have independent application life cycle management solutions (IBM, Oracle, and Software AG) naturally promote use of these products with their ESBs.

Dynamic service provisioning. Most ESBs have the ability to dynamically provision new ESB operations, which means that users can add or modify flows without having to restart ESB components. ESBs that can host services themselves can also dynamically provision those services. Innovations in this space include the ability to dynamically control the number of service instances running to meet SLA targets.

Complex event processing (CEP). CEP is growing, and as ESBs are both conduits for and sources of events, they are natural components of CEP applications. Vendors are adding prebuilt integrations for their own and third-party CEP engines.


A business rules engine (BRE). None of the ESBs include an embedded BRE, but several offer prepackaged integration with a third-party product. Some provide a plug-in BRE themselves; others require separate licensing of a third-party product.

The case for an ESB

An ESB is a core component of a mature SOA platform. It is also a sophisticated set of middleware components, which means it will include hardware costs, software licensing and support costs, training costs, and the time it takes to do the necessary design work. An ESB begins to make sense when the benefits exceed the complexity of implementing it. If the customer has straightforward needs with a limited number of services, a simple Hypertext Transfer Protocol (HTTP) infrastructure and matching SOA governance processes might be sufficient, and going for an ESB may not be justifiable, considering the implementation effort and cost associated with it. An ESB really starts to make sense, though, when customers need:

More robust service management than a simple infrastructure provides. Customers have large numbers of services, a need for extensive versioning, and a need for robust reporting on service utilization statistics.

Complex security policies applied to services. ESBs provide request authentication and authorization features, and also provide the means to integrate with a variety of security subsystems directly.

Both synchronous and asynchronous service invocation. If customers have a lot of messaging or electronic events, there is a need to have asynchronous services alongside the synchronous services. Asynchronous service invocation can be used for simple notification or other interaction models, such as publish and subscribe.

Routing of service requests to specific service provider instances. If customers have multiple special-purpose instances of a service, they will likely want an ESB to perform routing of individual requests to the appropriate instance. Customers may, for example, have specific clients whose data is hosted on specific hardware.

Different protocols for requestors and providers. Customers may have services available through messaging or other mechanisms, but want to make these services available to consumers via Web services. This could be because customers have legacy systems.

Transformation of data in some of your requests and responses. Customers may have services that need only specific data or that require data to be represented slightly differently.

A variety of different service infrastructure components. Customers may have security, XML acceleration, or other appliances that need to work in concert with their services, or customers may have Web services infrastructure components from multiple vendors that you need to integrate.

A growing number of composite services. Commercial ESB products include some level of Business Process Execution Language (BPEL) or other orchestration, making it possible to create composite services within the ESB itself. Such composite services can also be used for process automation, in which several existing services are invoked to create a larger process.

Publish/subscribe. Customers may expect to implement a loosely coupled messaging pattern like publish/subscribe, in which publishers and subscribers remain ignorant of the system topology and can continue to operate normally regardless of each other, in contrast to a traditional tightly coupled client–server paradigm. An ESB lets publishers post messages to an intermediary message broker and lets subscribers register subscriptions with that broker, leaving the filtering to the broker. The broker normally performs a store-and-forward function to route messages from publishers to subscribers.
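The publish/subscribe decoupling described above can be sketched with a minimal in-memory broker; this is illustrative only (a real broker adds persistence, store-and-forward, and delivery guarantees), and the topic names are invented:

```python
from collections import defaultdict


class Broker:
    """Minimal topic-based publish/subscribe broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher knows nothing about who (if anyone) receives this;
        # the broker performs the filtering by topic.
        for callback in self.subscribers[topic]:
            callback(message)


broker = Broker()
received = []
broker.subscribe("orders.created", received.append)
broker.publish("orders.created", {"id": 1})
broker.publish("invoices.paid", {"id": 9})     # no subscriber: dropped
```

Note that neither side references the other: adding a second subscriber to "orders.created" requires no change to the publisher, which is the loose coupling the pattern exists to provide.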

Any one of the above needs may not necessarily demand an ESB — in most cases, there are other approaches that are viable as well. From a practical standpoint, though, customers will likely have more than one of these needs and that will tip the scales in favor of an ESB.


Selecting an ESB

Understanding the function of an ESB and its role and fit in your particular environment is essential to making a sound selection. Before deciding on a particular ESB, it is worth considering the following points:

A Java 2 Enterprise Edition (J2EE) application server. Some ESBs are themselves JEE applications that, therefore, run on top of an application server. Some such ESBs require a specific application server, while others will work with any vendor’s application server. Most of these ESBs use the high-availability, scalability, and Java Message Service (JMS) features that the application server provides. Organizations with significant JEE installations will find this to be a benefit since they are accustomed to this administration and value the consistency in the underlying software infrastructure. Other organizations will see this application server as an additional layer to understand and manage.

Clustered hardware. Some ESBs achieve high availability through hardware clustering. If this is the case for your ESB, you will have fewer choices in hardware as well as the additional cost of clustering hardware, software, and administration. These solutions will make more sense in environments where there is already a lot of hardware clustering in place for other solutions.

Database or clustered database. For any operation requiring the maintenance of state or configuration information, some kind of persistent data store is required. Some ESBs do this using their own file formats, some use features of an underlying JEE application server, and others require a relational database to fill this need. Cases requiring a relational database may need to use hardware clustering to achieve high availability. In some environments, this is no big deal; in others, it adds significant cost and complexity.

Development tooling. Your integration and composite service developers will spend much of their time working with an ESB’s developer tooling. The user experience varies widely, ranging from working directly with XML files and command-line scripts to working in integrated development environments, very often based on Eclipse, which let developers work with graphical models of event flows. You will need to evaluate your integration community’s skills and preferences to determine the best fit for your environment.

Monitoring and support. Some ESBs provide extensive support for monitoring and support, including the ability to view and manage in-progress business activities within the ESB (both BPEL and shorter-running message flows). Other vendors take a different approach, either providing these capabilities in separate BAM and system management environments or expecting that customers will already have those capabilities in house. Organizations without extensive support infrastructure in place must give more serious consideration to solutions that include these features.

Out-of-the-box features. Some ESBs provide extensive out-of-the-box features, including a services registry, and ship with an extensive set of technology and application adapters. A few also provide an alert management system for error reporting and Application Programming Interfaces (APIs) for custom adapter development and configuration. It is good to choose an ESB with strong out-of-the-box features.

Understanding ESB functionality is an essential step in selection

An ESB’s job is to give service consumers access to services, regardless of location, protocol, transport, interface technology, security domains, syntactical mismatches, and semantic differences. This means that ESBs must support many interface and transport protocols and data formats, and it also means they must provide conversions between interface protocols, between transport protocols, and between data formats. ESBs also route requests based on data; support request/response, notification, publish and subscribe, and other interaction styles; and integrate with a range of commercial directories and security models.

The real power of an ESB comes from its ability to chain operations together to create individual “message flows” (sometimes called “itineraries,” “lightweight orchestration,” or just “flows”). Developers can string services together, the results of one service providing the input to the next, without having to build the flow logic into the services themselves.
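A message flow of this kind, where the result of one service feeds the next, can be sketched as plain function composition; the service names and flow identifier below are invented for illustration:

```python
from functools import reduce


# Hypothetical services in a flow; each takes and returns a message dict
# and contains no knowledge of what runs before or after it.
def validate(msg):
    assert "customer" in msg, "missing customer"
    return msg


def transform(msg):
    return {**msg, "customer": msg["customer"].upper()}


def annotate(msg):
    return {**msg, "routed_by": "esb-flow-1"}


def run_flow(message, steps):
    # The flow engine strings the services together; reordering or adding
    # a step changes only this list, never the services themselves.
    return reduce(lambda m, step: step(m), steps, message)


result = run_flow({"customer": "acme"}, [validate, transform, annotate])
```

Changing the itinerary means editing the `steps` list, which is exactly the property that makes graphically edited flows maintainable long after their original development.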

Positioning SAP NetWeaver Process Integration (PI) as the ESB

What are the typical integration challenges in an enterprise? Integration challenges are often the foremost obstacle to getting the full value from packaged enterprise resource planning (ERP) solutions. Integration with these systems has frequently led to significant complications because core ERP applications are often better at collecting information than at making it available in all the places it is required. Typically, we see the integration challenges below in most customer IT landscapes.

Limited interoperability: In most cases, ERP applications can handle integration with other packaged applications, such as customer relationship management (CRM) or human resource management (HRM), from the same vendor, but reaching out beyond the suite is often problematic. Connections to applications from other vendors, homegrown applications, and other legacy assets typically require the use or creation of custom interfaces.

Evolution of application platforms and infrastructure: The evolution of application platforms, the application suites built on them, and the integration infrastructure designed to bind them — SOA, followed by BPM, events, the cloud, and now elastic application platforms — adds capability and flexibility to the enterprise, but it also drives up integration complexity.

Processes that run across value chains: In some highly collaborative value chains, such as pharmaceuticals or consumer-packaged goods, shared processes are the norm. Enterprises in these and other sectors with similar needs have to contend with the fact that an increasing number of critical processes are not contained within the four walls of a single enterprise; they must be able to cope with standards-based exchanges — Electronic data interchange (EDI) and other business-to-business (B2B) data exchanges and standards, such as RosettaNet, Chemical Industry Data Exchange (CIDX), Petroleum Industry Data Exchange (PIDX), and Association for Cooperative Operations Research and Development (ACORD).

New realities: There are a growing number of newer, ad hoc exchanges that enterprises must accommodate. These range from custom XML formats between two or more partners, to Microsoft Excel spreadsheets, to a wide range of flat files. Enterprises must be able to easily move this information into and out of their ERP systems while maintaining adequate levels of control.

Interoperability with the cloud: There is a growing availability of software as a service- (SaaS-) based integration software in the market, rapidly expanding alternatives for customers. SaaS-based integration has significantly different implementation and operational attributes compared with on-premise integration, but when the economics are right, its greater flexibility and speed of implementation make it a good choice for meeting enterprise integration needs. Also, many enterprises could benefit from the flexible acquisition of additional resources that cloud-based computing enables.

Business expectation of application flexibility: The business will no longer put up with rigid applications that are too slow to respond to business change. Most major application vendors, such as Oracle and SAP, have spent the past several years updating their application platforms and application architecture to enable much greater flexibility, using approaches such as SOA, BPM, and event technology. Implementing and integrating these dynamic business applications often requires IT to configure many more aspects of application behavior, business process flow, and user experience, which further increases the complexity of the integration challenge.

Much closer business involvement: Application architectures of the past provided little or no opportunity for the business to control applications’ behavior; even trivial changes required a Change Request (CR) and developer involvement. Developers are still involved, but they now share responsibility with the business for modeling processes and business rules, and in some cases, these application elements remain under the control of business “super users” even after the application goes into production. This in turn adds risk and complexity that IT must manage while giving the business the greater level of control it wants.

Much broader application access: During the earlier days of ERP integration, the range of choices within the context of user experience was limited, often including only Microsoft Windows clients and Web browsers. Today’s applications are exposed through a much wider range of interaction channels, including not only desktops and Web, but also employee- and consumer-facing mobile apps, retail point of sale, fully instrumented real-time supply chains and warehouses, and Web services interfaces that expose application capabilities to the extended enterprise. Delivering application capabilities through all these channels requires a cohesive approach to multichannel and cross-channel support, which generates a completely new range of integration requirements.

ERP app integration with other apps from the same vendor: In today’s enterprise, the ERP system must be connected with complementary apps, such as CRM, SAP Supplier Relationship Management (SRM), SAP Supply Chain Management (SCM), or HRM solutions, and these complementary apps often come from the same vendor.

ERP app integration with legacy apps or apps from another vendor: This is often more challenging, as ERP-vendor-supplied integration tools vary widely in their ability to support integration outside their own application suite.

ERP app integration with the outside world: This is another area with a gap between some ERP vendors’ capabilities and their customers’ growing need for B2B integration. SAP has provided effective EDI-based interactions through third-party integration vendors (with a strong focus on Seeburger) for years. SAP has also recently formed a partnership with Crossgate to provide enhanced B2B integration features.

How can SAP NetWeaver PI as ESB help address the above integration challenge?

When the client’s overall IT application strategy is based on SAP (i.e., when SAP provides a substantial proportion of the overall application portfolio in the client landscape), the client has already chosen SAP as the strategic vendor for running its core business processes. In such cases, it is definitely worthwhile to consider the integration middleware/ESB solution from SAP as the core integration platform for the client landscape.


A major and unique advantage to having both the ESB and ERP software from SAP is the availability of SAP PI content. SAP provides PI content for SAP Business Suite applications, for SAP industry solutions like Retail, Oil and Gas, and Consumer Products, and for SAP NetWeaver applications like Master Data Management (MDM). This PI content reduces development time considerably and avoids building interfaces from scratch, which would be the case if the client decided to go with a third-party ESB vendor.

SAP NetWeaver PI has been in the market since 2004 and has accumulated an installed base comparable to that of the leading integration middleware vendors. Most customers deploy the product to integrate SAP applications with other vendors’ packages or custom applications, and a few also use it for B2B integration by combining NetWeaver PI with third-party technology (from Seeburger) or on-demand services (SAP Information Interchange from Crossgate), both resold by SAP. Since Q2 2012, SAP provides native add-ons for B2B support under additional licensing.

SAP NetWeaver PI has proven that despite its SAP-centric design and technology foundation, it can be utilized to support SAP-to-non-SAP integration scenarios. Most clients use NetWeaver PI to support batch-oriented requirements, but the product has also been successfully deployed to support real-time or near-real-time integration requirements.

SAP NetWeaver PI as a Message Oriented Middleware (MOM) provides reliable transport and assumes responsibility for delivering each message to the proper application. The sender application simply puts the message into the queue and leaves delivery to the MOM. If the receiver application is not running, the message remains in the queue until the receiver is up again and fetches it. PI quality of service describes mechanisms relating to message transport. These include:

– Guaranteed delivery of messages within PI

– Check for duplicate messages

– Delivery in the correct sequence (serialization)

SAP PI quality of services available are:

– Best Effort: The message is sent to the receiver without guaranteed delivery, duplicate checks, or serialization mechanisms.

– Exactly Once: The message is sent to the receiver exactly once. There is no serialization. All components involved in message processing (including modules and the Java Connectivity Architecture (JCA) adapter) must guarantee exactly-once delivery of the message through the correct transactional procedure.

– Exactly Once in Order: This mode is an extension of the Exactly Once mode. In addition to the above characteristics, serialization takes place on the sender’s side. All components must guarantee that messages are delivered exactly once and in the same sequence in which they were sent. Serialization is achieved using a queue that is identified by the serialization context.
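The mechanics behind Exactly Once (duplicate check) and Exactly Once in Order (FIFO per serialization context) can be sketched in plain Python. This is an illustration of the mechanism only, not SAP PI code; the class, message IDs, and context names are invented:

```python
from collections import deque

class ExactlyOnceInOrderChannel:
    """Sketch of the Exactly Once in Order quality of service: duplicate
    message IDs are discarded, and delivery follows a FIFO queue per
    serialization context."""

    def __init__(self):
        self.seen_ids = set()   # duplicate check (the Exactly Once part)
        self.queues = {}        # serialization context -> FIFO queue

    def send(self, msg_id, context, payload):
        if msg_id in self.seen_ids:
            return False        # duplicate: dropped, not re-queued
        self.seen_ids.add(msg_id)
        self.queues.setdefault(context, deque()).append(payload)
        return True

    def deliver_all(self, context):
        """The receiver fetches messages in the exact order they were sent."""
        q = self.queues.get(context, deque())
        out = []
        while q:
            out.append(q.popleft())
        return out

ch = ExactlyOnceInOrderChannel()
ch.send("m1", "ORDERS", "create order 4711")
ch.send("m1", "ORDERS", "create order 4711")   # duplicate: ignored
ch.send("m2", "ORDERS", "change order 4711")
print(ch.deliver_all("ORDERS"))  # ['create order 4711', 'change order 4711']
```

Removing the duplicate check yields Best Effort semantics; keeping it but delivering from a single unordered pool would be Exactly Once without ordering.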

SAP PI enables loose coupling: it decouples the client (the application needing access to a service) from the service provider. The client no longer needs to know who provides the service, because SAP PI is responsible for creating the communication channel between the client and the service providers. The application does not need to contain integration code, because it is no longer responsible for creating connections, reconnecting after a communication error, or knowing the connection information of the service provider. Introducing SAP PI into the landscape therefore simplifies the design of the client.

SAP PI provides service location transparency. In a tightly coupled system, any change to the connection data of a service provider also requires a change in every client using that provider. This is no longer true with PI, because PI is now responsible for storing that information: the client no longer has to know the exact location of the service provider.


SAP PI enables sharing of services. Once implemented, a service might be used in many projects or by clients (applications) from various departments. For example, a service providing information about the employee might be used by the systems from the HR, Finance, or IT departments.

SAP PI provides a clear separation of responsibilities in the customer landscape. Companies are run by business rules, and the responsibility of the business is to know how to run the company: what its inputs and outputs are and what services it provides. The responsibility of an ESB, on the other hand, is to know the Internet Protocol (IP) address and Transmission Control Protocol (TCP) port of the application server, the protocol used by a particular system, the signature of a particular method, and so on. If one day a particular service implementation is replaced by a new one, the way the company does business will not change, and no modification can be expected or enforced on the company's business partners or customers; only the ESB has to be updated. This decouples the business model (the way the company works) from the implementation.

B2B integration needs. Enterprises have a choice of maintaining their own B2B infrastructure or relying on private exchanges for B2B integration. Larger enterprises typically maintain their own B2B integration capability, because high-volume transactions would be more expensive under the subscription pricing that private exchange models employ. Such enterprises need products that support a wide range of B2B transport protocols, such as File Transfer Protocol (FTP), Secure File Transfer Protocol (SFTP), Applicability Statement 1 (AS1), Applicability Statement 2 (AS2), RosettaNet Implementation Framework (RNIF), X.25, and others, as well as a variety of B2B data formats, such as X12, EDIFACT, ODETTE, TRADACOMS, PIDX, CIDX, and RosettaNet. SAP NetWeaver PI, with its strong B2B integration capability, can be a natural fit for enterprises with B2B needs.

When formulating ERP integration strategy, larger organizations may find it necessary to employ more than one type of integration solution in combination; however, it is better to solve all the problems with one solution wherever possible. SAP customers with obsolete or close-to-end-of-life integration platforms (e.g., Oracle-Sun Microsystems eGate or IBM/CrossWorlds WebSphere InterChange Server) should consider SAP NetWeaver PI as a potential candidate for replacing the incumbent technology.

For SAP-to-SAP integration, the PI license is included with the NetWeaver suite license and the MySAP license. Customers pay separately only for SAP-to-non-SAP integration; pricing is based on the overall processed message volume in gigabytes per month, with special discounts available for large SAP customers. SAP's pricing model is typically "the more you use, the less you pay."

SAP PI can easily replace Business Connector (BC) and can be a perfect fit for SAP customers using BC to make a staggered move to PI. SAP PI is the central point for all NetWeaver components and an integral part of SAP's Enterprise Service Oriented Architecture (ESOA) strategy, so customers who have invested heavily in NetWeaver and ESOA gain a lot by having PI as the one integration platform in their landscape.

When we compare SAP NetWeaver PI against a classic ESB reference architecture model, we can see that PI has evolved to provide support for all layers of the ESB reference architecture.


The ESB Reference Architecture Model

ESB layer / ESB component: SAP NetWeaver PI features

Architecture / Availability: SAP PI provides the ability to use clustering to support high availability and scalability across environments that scale horizontally and/or vertically.

Federation: SAP PI supports federated deployments. Customers may choose a distributed (federated) PI landscape for several reasons, including:

Separation of A2A and B2B processes

Separated company divisions with majority communication/integration needs residing within the same division

Network constraints/downtime constraints

Geographically distributed organization with majority communication concentrated locally

One central PI for global processes (HR, Finance, and master data) and decentral PIs for divisions

Topology: ESBs can be distributed or brokered. SAP PI is a brokered ESB that relies on a "hub-and-spoke" topology. Either option, distributed or brokered, provides an adequate level of ESB functionality.

Extensibility: SAP PI makes it possible for customers to add infrastructure capabilities themselves, a means to extend the core functionality provided by PI.

Connection / Messaging: SAP PI supports both synchronous and asynchronous messaging and a wide variety of communication protocols, including Web services, Simple Object Access Protocol (SOAP), and JMS. It also provides protocol conversion capabilities to enable the exchange of information between incompatible protocols.

Routing: SAP PI provides the ability to support both predefined and dynamic routing (based on interpretation of message content).

Connectivity: SAP PI provides the means to connect to the databases, messaging systems, management tools, and other infrastructure components that are part of an organization's existing infrastructure.

Mediation / Dynamic provisioning: SAP PI can dynamically provision new ESB operations, enabling users to add or modify flows without having to restart PI components.

Policy meta model: SAP PI supports policies that govern not only routing but also filtering or augmentation of messages.

Registry: SAP PI provides a Universal Description, Discovery, and Integration- (UDDI-) compliant registry to locate available enterprise services/SOA assets.

Transformation and mapping: Transformation capabilities for SAP PI have improved significantly over the past several years. Capabilities range from simply mapping the data formats of message payloads to providing rich aggregation or decomposition of service semantics so that each side of the conversation is unaware of the other side's details.

Transaction management: SAP PI adapters can be configured to ensure transaction handling in the applications they interface with. For example, the SAP PI Java Database Connectivity (JDBC) adapter can be configured to roll back all the statements executed against a database if one of the statement nodes fails to execute.

SLA management: SAP PI provides some basic capability to monitor adherence to SLAs; e.g., ccBPM process control steps can be configured to raise exceptions or alerts when the expected interface execution flow is not met. PI also supports message prioritization for interfaces that require priority processing to adhere to SLAs.

Orchestration / Lightweight orchestration: Lightweight orchestration means connecting multiple services together into a larger composite service, with the ESB (PI) managing the flow of control and information among the component services.

BPEL support PI has the capability to create, execute, and manage BPEL orchestrations.

Change and control / Design tooling: SAP PI provides for easy administration of the product, graphical editing capability, and the ability to support service creation and deployment.

Life cycle management: SAP PI supports unified life cycle management capabilities, which are further enhanced in the 7.3 release. A few are listed below:

Centralized monitoring environment

PI monitoring Good Morning page in SAP Solution Manager

PI scenarios visible within SAP Solution Manager (for documentation)

Optional additional message persistence on Advanced Adapter Engine (AAE)

Enhanced logging on AS Java

Flexible upgrade paths provided for all releases, including:

SAP eXchange Infrastructure (XI) 3.0

SAP NetWeaver PI 7.0 incl. Enhancement Pack (EHP) 1 and EHP 2

SAP NetWeaver PI 7.1 incl. EHP 1

SAP NetWeaver PI 7.3 incl. EHP 1

Enhanced transport management features, including CTS+ in addition to the file-based export/import option.

Security: SAP PI provides security mechanisms, including:

Message-level security (digitally signed or encrypted documents exchanged between systems or business partners): SAP NetWeaver PI offers message-level security for the XI protocol, for the RosettaNet protocol, for the CIDX protocol, and for the SOAP and Mail adapters. Message-level security relies on public and private X.509 certificates maintained in the AS Java keystore, where each certificate is identified by its alias name and the keystore view where it is stored. The message security settings are available in the corresponding collaboration agreement.

Transport-level security: PI supports transport security for HTTP, RFC, and FTP protocols. HTTP connections can be secured using SSL, RFC connections can be secured using Secure Network Communication (SNC), and FTP connections can be secured using FTPS. The various authentication methods supported include user/password, client certificate, SAP assertion ticket, SAML assertion, X.509 authentication token, etc.

Role-based authorization: SAP PI supports defining detailed authorization that can restrict access to Enterprise Service Repository (ESR) and Integration Directory (ID) objects according to a role-based authorization model. The access authorization can be defined at the object-type level where we can specify each access action, either individually as create, modify, or delete for each object type, or as an overall access granting all three access actions.

Access control list- (ACL-) based authorization: SAP PI supports defining authorizations based on ACLs to 1) ESR and ID objects, 2) service users, and 3) SNCs.

Technical monitoring: SAP PI provides tools to investigate problems, find root causes, and take action to correct the issues that are discovered. The various tools include:

SAP Solution Manager for total IT landscape management: Allows for central, IT role-based administration and monitoring across the whole SAP landscape using work centers.

SAP NetWeaver Administrator for PI landscape monitoring: Allows for central monitoring of multiple PI domains and provides a message status overview for all messages processed in all PI domains, in an aggregated view with drilldown capability.

User-defined message search capability: Allows searching for messages using business-relevant message payload content as search criteria (a separate Text Retrieval and Information Extraction (TREX) installation is NOT needed).

SAP NetWeaver PI provides a broad range of capabilities to address enterprise integration challenges. A few of these capabilities are listed below:

Message transformation — ability to transform the structure and format of a business request into what the service provider requires. PI as an ESB has the capability to change the message format to the format accepted by the service provider.

Message routing — ability to send a request to a particular service provider based on some criteria. PI as an ESB has the capability to decide to which service providers a particular message should be delivered.

Security — ability to protect an ESB from unauthorized access. PI as an ESB assures that only selected clients have access to particular services.

Message enhancement — ability to modify and add information as required by the service provider. PI as an ESB has the capability to add additional information to the message before it is delivered to the service provider. That information might come from a database or some other system.

Protocol transformation — ability to accept messages sent using divergent protocols, i.e., Internet Inter-Orb Protocol (IIOP), SOAP, HTTP, JMS, JDBC, etc. This capability consists of two aspects: logical — an ESB must understand the protocol (its semantic and syntax) and physical — an ESB must have a component suited for operating using that protocol, i.e., HTTP server, SOAP server, JMS client, etc.

Service mapping — ability to translate a business service into the corresponding service implementation and provide binding and location information. It is a mapping performed by an ESB between abstract business service and implementation service (IP address, port, name of the method, etc.).

Message processing — ability to monitor the state of the received request. For a client sending a message to the ESB (SAP PI), the most important thing is that the message is never lost. To achieve that, the ESB has to monitor the state of the message: has it been processed by the service provider, was the processing successful, is the service provider available, and so on.

Process choreography — ability to manage complex business processes that require the coordination of multiple business services to fulfill a single business service request. This functionality enables the client to perceive business requests as one single request, while in fact its execution may trigger the execution of multiple business services. It is usually implemented as a BPEL, which is a language enabling business process modeling.

Service orchestration — ability to manage the coordination of multiple implementation services. The difference between previously described process choreography and service orchestration is the type of service being managed — in case of service orchestration it is an implementation service and in case of process choreography it is a business service.

PI support for established ESB integration patterns: SAP PI as an ESB supports established ESB patterns, including VETO and VETRO. VETO stands for validate, enrich, transform, and operate. It is a widely used integration pattern in ESB solutions. The VETO pattern ensures that data exchanged in an ESB will be consistent and valid. Each component in the VETO pattern is typically implemented as a separate service and can be configured and modified independently of any other component.

Validate: The aim of the validate step is to ensure that messages received by the service provider have proper syntax and semantics. This step should be performed independently, not inside the service provider, because that would limit the reusability of the validation and complicate any further modifications of it. Moreover, implementing validation as a separate component ensures that every message that reaches the service is in a proper format, which simplifies the design of the service provider and lets the operate step focus on business logic. The simplest way of validating an incoming message is to check whether the message is a well-formed XML document and conforms to the XML schema or Web Services Description Language (WSDL).

Enrich: The aim of the enrich step is to add to the message content additional data that the service provider needs, for example, information about the customer who placed the order. That information might be fetched from a database or might be the result of invoking another service.

Transform: The aim of the transform step is to change the message format to the one accepted by the service provider. This step might transform the message into an internal message format of the service provider, releasing the operate step from the need to perform this task and, therefore, increasing its efficiency.

Operate: The aim of the operate step is to invoke the target service or to interact in some way with the target application.

VETOR/VETRO pattern: The VETOR pattern is a VETO pattern with one new component, the router, placed right before the operate step. The aim of this step is to decide whether a message should be delivered to the service or not. The router is typically implemented as part of the transform component or as a separate service.
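The steps above can be sketched as a chain of independent functions, one per VETRO component. This is a generic Python illustration (not PI configuration); the document type, customer lookup, routing threshold, and receiver names are all invented for the example:

```python
import xml.etree.ElementTree as ET

def validate(xml_text):
    """Validate: is the message well-formed XML with the expected root?"""
    root = ET.fromstring(xml_text)       # raises ParseError if malformed
    if root.tag != "order":
        raise ValueError("unexpected document type: " + root.tag)
    return root

def enrich(order):
    """Enrich: add customer data the provider needs (stub lookup here,
    standing in for a database or service call)."""
    customer_db = {"C42": "ACME Corp"}
    order.set("customerName", customer_db.get(order.get("customer"), "unknown"))
    return order

def transform(order):
    """Transform: convert to the flat format the target accepts."""
    return {"id": order.get("id"), "customer": order.get("customerName")}

def route(message):
    """Route (the R in VETRO): pick a receiver based on message content."""
    return "bulk_orders" if int(message["id"]) >= 1000 else "standard_orders"

def operate(message, receiver, targets):
    """Operate: invoke the target service (here, append to a receiver list)."""
    targets.setdefault(receiver, []).append(message)

targets = {}
flat = transform(enrich(validate('<order id="1001" customer="C42"/>')))
operate(flat, route(flat), targets)
print(targets)  # {'bulk_orders': [{'id': '1001', 'customer': 'ACME Corp'}]}
```

Because each step is a separate function, any one of them can be modified or reused independently, which is exactly the property the pattern is meant to provide.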

Integration strategy

In order to define the integration strategy, the integration components will first be discussed. The integration components identify various functions and architecture recommendations based upon integration requirements identified by the Business Advisory (BA) teams. Specifically, the following integration components will be addressed:

Integration architecture strategy

Communications middleware

Data transformation and formatting

Application and technical connectivity

BPM

Integration architecture strategy

The Integration Architecture describes the strategies and techniques for the physical implementation of the technical components that make up the execution (production) architecture.

ESB integration architecture

The ESB is a hybrid architecture approach that includes the integration patterns described in the following sections. It combines characteristics from these patterns to fulfill the documented integration requirements. This architecture reduces the risk of outages to the entire integration infrastructure. Scaling or expanding hybrid architecture is also easier since changing a component will not affect all other components in the infrastructure. It also allows for the EAI solution to evolve into the ESB component of the SOA implementation.

Also note: With this hybrid architecture, there may be multiple instances of patterns introduced for various reasons. For example, there may be clusters of applications with their own hub. In this situation, there may be high traffic within a hub and some lower level of traffic between hubs. Multiple hubs may also be employed for different types of messages.


Figure 1: Example hybrid architecture

Hub-and-spoke integration

In this pattern, applications/systems communicate directly to a middleware hub, which is responsible for routing the message to the appropriate target application or system. For publish/subscribe messaging, subscribers register with the hub. For request/reply messaging, the hub acts as a broker between the requesting application and the servicing application. In either case, applications are loosely coupled and have no data dependency with each other.

Figure 2: Example hub-and-spoke architecture

(Diagram: applications such as SAP, STAR, Hyperion, and Kronos connected through application connectivity layers to PkMS and SAP PI hubs within an integration services layer.)

Key points:

A hub provides a single control point where other features can be added (e.g., data transformations and routing).

The hub-and-spoke architecture provides the ability to quickly respond to new or changing environments, and it provides greater control of transaction monitoring and role-based access control (security).

A hub-and-spoke architecture makes monitoring and problem diagnosis less complex. There are fewer physical components, a less complex environment, and a well-defined path that all messages follow.

A logical point-to-point communication channel can be implemented over a physical hub-and-spoke architecture.

Scalability, load balancing, and availability are considerable implementation issues in a hub-and-spoke architecture.

Distributed (federated) integration

A distributed pattern is conceptually similar to the hub-and-spoke pattern, but it distributes integration components to the source or target systems to perform message translation, routing, etc., governed by centralized rules. This pattern reduces load on the hub and provides support for geographically dispersed regions, load balancing, disaster recovery, and fault tolerance.

Figure 3: Example distributed/federated architecture

Key points:

Provides flexible scheduling and transport of high-volume traffic between distributed components without requiring the network traffic to flow through the hub.

Through appropriate deployment, can maintain guaranteed delivery of events, even if the network is unreliable (through an existing two-phase-commit publish/subscribe solution).

Operational requirements, such as system monitoring, message monitoring, and end-to-end message tracing, are more difficult to provide in this environment.


Service locator integration

A service locator architecture pattern provides a requestor application with the information needed to connect to the appropriate service provider (source application). This typically is a three-step process:

1. Services register with the service locator.

2. Requesting applications request information about the required service.

3. The requesting application and the service bind and then communicate.

While this does involve point-to-point communication, awareness or knowledge of other applications is not built into any application. Note that an application can act as a requestor and as a service. Most often, this architecture model is associated with Web services.

Figure 4: Example service locator pattern

Key points:

This can be a very simple, easy-to-use, synchronous messaging paradigm with fewer moving parts.

This pattern has not been proven in high-volume, mission-critical transactions.

As described above, the service locator architecture model behaves in a request/reply synchronous paradigm — although this does not have to be the case. The application connectivity component can be extended to include store-and-forward capabilities, guaranteed delivery, etc. When these features are added to the service locator model, this solution can look exactly like the distributed architecture described above (with the service locator facilitating communication between the client middleware components).
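The three-step register/locate/bind flow can be sketched in plain Python. This is a generic illustration, not a UDDI client; the registry class, service name, endpoint URL, and handler are invented for the example:

```python
class ServiceRegistry:
    """Minimal service locator: holds binding information only; the actual
    call is then point-to-point between requestor and provider."""

    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, handler):
        # Step 1: the service registers itself with the locator.
        self._services[name] = {"endpoint": endpoint, "handler": handler}

    def lookup(self, name):
        # Step 2: a requestor asks for the information needed to connect.
        return self._services[name]

registry = ServiceRegistry()
registry.register("employeeInfo", "http://hr.example.internal/employees",
                  lambda emp_id: {"id": emp_id, "dept": "Finance"})

binding = registry.lookup("employeeInfo")   # step 2: request location
result = binding["handler"]("E-17")         # step 3: bind and request
print(binding["endpoint"], result["dept"])
```

Note that the registry never sees the request itself; awareness of the provider's location lives in the registry, not in the requestor, which is what keeps the applications decoupled despite the point-to-point call.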

Communications middleware

Communications middleware/messaging capabilities were identified and recommendations regarding each capability have been made, including:

Message prioritization

Asynchronous communication

Synchronous communication


Publish/subscribe

Persistent delivery

Transaction management

Message prioritization

Prioritization of messages can be performed based upon time stamp (when messages were placed in a queue), message type, class, sending application, or message data contents. Message prioritization can impact efficiency and performance.

Key points:

Message priorities can be individually assigned by applications for each message, or the integration architecture can assign all messages of a single type to a selected priority.

Allows for the processing of higher-priority messages prior to lower-priority messages.

Allows the technical infrastructure to adapt to application requirements and SLAs.

Message prioritization will be achieved through message routing (EAI tool) or queue prioritization (messaging tool).

The level of message interrogation to support message routing can impact efficiency and performance.
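The behavior described above, higher-priority messages processed first and arrival order preserved within a priority, can be sketched with a heap-based queue. This is a generic Python illustration, not an EAI-tool configuration; the type-to-priority mapping is an assumed example:

```python
import heapq
import itertools

class PriorityMessageQueue:
    """Lower number = higher priority; a monotonically increasing arrival
    counter breaks ties so same-priority messages keep time-stamp order."""

    PRIORITY_BY_TYPE = {"payment": 0, "order": 1, "catalog": 2}  # assumed mapping

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()

    def put(self, msg_type, payload):
        prio = self.PRIORITY_BY_TYPE.get(msg_type, 9)  # unknown types last
        heapq.heappush(self._heap, (prio, next(self._arrival), payload))

    def get(self):
        return heapq.heappop(self._heap)[2]

q = PriorityMessageQueue()
q.put("catalog", "catalog update #1")
q.put("payment", "payment #77")
q.put("catalog", "catalog update #2")
print(q.get())  # payment #77 is processed before the earlier catalog updates
```

Assigning priority per message type, as here, corresponds to the architecture assigning all messages of a single type to a selected priority; per-message priorities would simply pass the priority in explicitly.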

Asynchronous communication

Asynchronous messaging allows an application to transfer a message to the integration services layer and continue processing. The integration services layer is responsible for queuing the message, locating the destination application, and delivering the message to the destination. Asynchronous messaging enables decoupled, near-real-time communication.

Key points:

Asynchronous messaging should be used whenever an application does not need a message reply or acknowledgement to continue processing.

Asynchronous messaging is typically implemented through a message queuing architecture using message-oriented middleware or the messaging components of an EAI tool (e.g., SAP PI and WebSphere MQ).

Asynchronous messaging provides eventual guaranteed messaging; however, there is no guarantee of response or reply time. The messaging architecture — not the application — is responsible for ensuring delivery.

Because asynchronous messaging enables “send-and-forget” communication, this is a less complex form of application communication.

Errors are logged and associated messages are sent to an error queue for processing.

Asynchronous communications can send and reply to transactions in near real time.

Does not build in architecture dependencies during design and development activities.
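The send-and-forget behavior and the error-queue handling described in these key points can be sketched as follows. This is a generic Python illustration of the pattern, not middleware code; the receiver logic and message shape are invented:

```python
import queue

inbound = queue.Queue()   # stands in for the messaging layer's queue
error_queue = []          # failed deliveries land here, not back at the sender
delivered = []

def send(message):
    """Sender enqueues and returns immediately ("send and forget")."""
    inbound.put(message)

def deliver_all(receiver):
    """The integration layer drains the queue; receiver failures are
    routed to the error queue for later processing."""
    while not inbound.empty():
        msg = inbound.get()
        try:
            receiver(msg)
        except Exception:
            error_queue.append(msg)

def receiver(msg):
    if msg["id"] < 0:                 # assume the receiver rejects negative IDs
        raise ValueError("bad id")
    delivered.append(msg)

send({"id": 1})
send({"id": -1})
# The sender has already moved on; delivery happens decoupled, later:
deliver_all(receiver)
print(delivered, error_queue)  # [{'id': 1}] [{'id': -1}]
```

The sender never waits on the receiver and never sees the failure directly, which is exactly the decoupling (and the "no guaranteed reply time") that the key points describe.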

Synchronous communication

Synchronous communication establishes a direct connection (session) between two applications. Unlike asynchronous communication, synchronous communication allows an application to receive an immediate reply and/or ensure that a transaction is processed immediately.


Key points:

Introduces dependencies/wait states into the architecture design.

Excessive time delay in message delivery is considered to be an error condition.

Synchronous messaging is typically used when the application requires immediate delivery of a message to a cooperating application before processing can continue.

The use of a transaction processor to provide synchronous messaging results in guaranteed message delivery.

Publish/subscribe

Publish/subscribe is a special data delivery capability that allows processes to register an interest in (i.e., subscribe to) certain messages. An application then sends (publishes) a message to the integration architecture, which forwards it to all processes that have subscribed to it. An example of a publish/subscribe scenario is item catalog distribution: several consumers, such as Echo, EDI (832s), and third-party applications, require copies of the catalog file. In this instance, SAP generates a generalized catalog that the integration services layer exposes to the subscribers at various endpoints, each published in a customized catalog format.

Key points:

Publish/subscribe is a coordination capability that matches and links a message initiator (publisher) with receivers (subscribers) in a dynamic fashion.

Requires the integration infrastructure to establish and maintain the relationship between publishers and subscribers. Enables routing data messages to a destination only if the destination has requested to receive that data type.

Enables messages to be sent to multiple target applications simultaneously — reducing the need for point-to-point interfaces within the EAI state architecture.
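A toy broker makes the mechanics concrete (the topic name and subscribers mirror the catalog example above; the classes and callbacks are invented for illustration):

```python
from collections import defaultdict

class MessageBroker:
    """Minimal publish/subscribe sketch: the broker (integration layer),
    not the publisher, maintains the publisher/subscriber relationships."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Route only to destinations that registered interest in this type.
        for handler in self._subscribers[topic]:
            handler(message)

broker = MessageBroker()
echo_copies, edi_copies = [], []
broker.subscribe("item_catalog", echo_copies.append)
broker.subscribe("item_catalog", lambda m: edi_copies.append({"832": m}))

# One publish reaches every subscriber; no point-to-point interfaces needed.
broker.publish("item_catalog", {"items": ["A", "B"]})
```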

Persistent delivery

Persistent delivery ensures that messages are not lost as they pass through the integration architecture. Mechanisms are in place to track and restore messages as well as confirm messages have been received.

Key points:

Ensures transport persistence and data transfer between the integration components, despite any failures that may occur.

Ensures messages are delivered regardless of receiving application availability at the time the message is initially sent.

Ensures messages are sent once and only once. Reduces duplicate messages and potential errors.

Can be accomplished through configuration of message queues or through implementation of a transaction management infrastructure (example: order acknowledgement).
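One common way to get once-and-only-once behavior is a persistent store keyed by message ID, sketched below (the table layout and IDs are illustrative, with in-memory SQLite standing in for the middleware's persistence layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE delivered (msg_id TEXT PRIMARY KEY, payload TEXT)")

def deliver(msg_id, payload, handler):
    """Persist first, then process; a duplicate msg_id is silently ignored,
    so retries after a failure cannot produce duplicate processing."""
    try:
        conn.execute("INSERT INTO delivered VALUES (?, ?)", (msg_id, payload))
    except sqlite3.IntegrityError:
        return False               # already delivered once
    conn.commit()                  # message survives a crash after this point
    handler(payload)
    return True

processed = []
deliver("ORD-001", "order ack", processed.append)
deliver("ORD-001", "order ack", processed.append)   # retry: ignored
```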

Transaction management

Transaction management ensures the integrity of messages as they move within the integration architecture. It supports capabilities such as two-phase commit, guaranteed delivery, rollbacks, and the ability to provide the appearance of synchronous communications over a queue-based (asynchronous) technical infrastructure.


Key points:

Ensures that multiple applications/data remain coordinated.

Avoids changing data within one database if related changes cannot be made within other databases.

May be managed by applications, not the integration architecture, since an application needs to be able to select its partners in a two-phase commit and respond to any errors.

Tightly couples applications.
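The two-phase commit coordination described above can be sketched as follows (a toy coordinator; real transaction managers add write-ahead logging, timeouts, and recovery):

```python
class Participant:
    """One resource (e.g., a database) in a two-phase commit sketch."""
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):             # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):              # phase 2a: make changes permanent
        self.state = "committed"

    def rollback(self):            # phase 2b: undo everywhere
        self.state = "rolled_back"

def two_phase_commit(participants):
    # Commit everywhere only if every participant votes yes; otherwise
    # roll back, so related databases stay coordinated.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

ok = two_phase_commit([Participant(), Participant()])
failed = two_phase_commit([Participant(), Participant(can_commit=False)])
```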

Data transformation and formatting

The data transformation and formatting layer of the integration services architecture is responsible for the conversion of data, message content, and syntax to reconcile the differences between data from multiple heterogeneous systems and data sources. This layer is responsible for maintaining the information structure of the messages passed between systems and their meaning in a format that can be understood by other applications. Data transformation and formatting considerations include:

Message layout

Distributed data transformation and formatting

Centralized data transformation and formatting

Message layout

Information passed into and within the integration architecture is formatted as a message. A vendor-neutral format, such as XML, for messages going in and out of the integration services layer is highly desirable. The message layout must also support tracking, e.g., through the header information, for ease of maintenance and debugging. The primary message type for interacting with SAP is the IDoc.

Key points:

A standard structure (format and layout) of the header, metadata, and data sections will be defined and consistent for all messages.

The specific contents of the header and metadata (example: routing information) need to be defined and consistent for all messages.

The data section contents must be defined explicitly for each required message sent to the integration services layer.
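A minimal sketch of such an envelope, with header (routing/tracking) and data sections, follows; the element names are invented for illustration and are not an SAP or PI schema:

```python
import xml.etree.ElementTree as ET

def build_message(msg_id, source, target, payload_fields):
    """Build a standard message layout: a Header section for tracking and
    routing, and a Data section defined per message type."""
    msg = ET.Element("Message")
    header = ET.SubElement(msg, "Header")
    ET.SubElement(header, "MessageId").text = msg_id   # supports tracking
    ET.SubElement(header, "Source").text = source
    ET.SubElement(header, "Target").text = target      # routing information
    data = ET.SubElement(msg, "Data")
    for name, value in payload_fields.items():
        ET.SubElement(data, name).text = str(value)
    return ET.tostring(msg, encoding="unicode")

xml_text = build_message("42", "SAP_ECC", "CRM", {"Material": "M-100"})
parsed = ET.fromstring(xml_text)
```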

Distributed data transformation and formatting

The data transformation and formatting can take place within the sending/receiving application or within a distributed integration component located with the application, rather than at the centralized integration services layer. A distributed integration component is an adapter, custom program, or integration tool specific to the application (example: Hyperion Integration Services).

Key points:

A standard format is exchanged between applications and each application transforms the data into the required format.

Reduces the number of required transformations and data formatting within the integration services layer, thus reducing complexity.


Centralized data transformation and formatting

The data transformation and formatting can take place at a centralized location, such as within the integration services layer.

Key points:

There is the potential for a single point of failure (SPOF) for a major part of the enterprise because the transformation and formatting is no longer distributed. Awareness is required to identify and incorporate the appropriate failover scenarios into the architecture design.

Less resource intensive than a distributed environment if the same transformations are taking place for multiple applications.

Application and technical connectivity

The application and technical connectivity layer provides reusable, noninvasive connectivity between packaged software (e.g., SAP, Hyperion, and Kronos) and custom legacy systems. This layer is enabled by reliable, event-driven messaging. A number of connectivity options exist to interface with the integration services layer.

Application connectivity options

The table below provides a definition for each application connectivity option along with a summary of key points.

Application connectivity options

Connectivity option Key points

APIs

Specific methods or functions made available by an application through which another program can make requests of this application.

Provides a level of abstraction between the integration architecture and the application.

May have platform dependencies.

Database access

Direct access to the application's database through the invocation of embedded SQL commands or by calling stored procedures to create, read, update, or delete information relevant to the communication between the applications.

Raises system-level issues, such as connection pooling.

Tight coupling between database and integration architecture unless a data access layer is used.

Message queue

Message queues provide for the transport and routing of messages between applications and systems.

Relatively easy to implement.

Inherently point-to-point communications.

Remote procedure call/remote method invocation

Modeled on remote object/component invocation where business services and data encapsulated in one application are requested directly through a synchronous call by another application.

Tight coupling.

Primarily used for internal enterprise integration.

Requires object serialization.

Web service

A software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a standards-based format (specifically WSDL). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards.

Suitable for heavy traffic in terms of cost.

Web service technology and standards are maturing.

Synchronous and asynchronous communication.

Based on open standards.
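As an illustration of the message format involved, a minimal SOAP 1.1 envelope can be constructed by hand (the service namespace, operation name, and parameter are hypothetical placeholders; a real call would post this payload over HTTP to the endpoint described in the service's WSDL):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, body_ns="urn:example:service"):
    """Wrap an operation call in a SOAP 1.1 Envelope/Body structure.
    The envelope namespace follows the SOAP 1.1 specification; the
    service namespace and operation are illustrative only."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{body_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = build_soap_request("GetOrderStatus", {"OrderId": "4711"})
```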


Using the connectivity options discussed above, the following interfaces to the integration architecture are discussed:

Existing business systems

Internet/portal applications

Third-party application packages

Communication with SAP

Integration technologies options

The integration strategy leverages technology assets, standards, and resources to fulfill integration requirements and create effective solutions. It is a systematic approach to integrate SAP, packaged applications, legacy applications, and data. Specific classes of products, including EAI, Web services, and Extract Transform and Load (ETL) tools, offer specific solutions as part of an overall integration strategy approach. An integration services layer is most successful when it uses the correct combination of the technologies employed within these different classes of products.

A summary listing of the key technology options is provided in the first table below; the second table provides a recommended usage matrix for these technologies.

Key technology options

Technology option Description

EAI: A set of tools and technologies that enables standards-based integration of disparate applications within and outside an enterprise. EAI tools can be used in conjunction with Web services implementations or independent of them.

Web services: A collection of technologies (including WSDL, UDDI, XML, and SOAP) that enable the creation of programming solutions to specific messaging and integration problems.

ETL: ETL technologies enable the extraction of large volumes of data from disparate source systems, the transformation of that data, and the loading of that data into target systems. The process is typically performed through batch loads, although several solutions now offer near-real-time data integration.
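The ETL pattern described above can be sketched in a few lines (table names and data are invented for illustration, with in-memory SQLite databases standing in for source and target systems):

```python
import sqlite3

# Source system with raw transactional rows.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales (material TEXT, qty INTEGER)")
src.executemany("INSERT INTO sales VALUES (?, ?)",
                [("m-100", 5), ("m-200", 3), ("m-100", 2)])

# Target system expecting aggregated, normalized data.
tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE sales_agg (material TEXT, total_qty INTEGER)")

# Extract + transform: aggregate quantities and normalize material codes.
rows = src.execute(
    "SELECT upper(material), sum(qty) FROM sales GROUP BY upper(material)"
).fetchall()

# Load: a single batch insert, as a scheduled ETL job would perform.
tgt.executemany("INSERT INTO sales_agg VALUES (?, ?)", rows)
totals = dict(tgt.execute("SELECT * FROM sales_agg").fetchall())
```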

Recommended usage matrix

Requirement EAI Web services ETL

Small message file size/discrete amounts of data • •

High frequency of transmission of small amounts of data • •

Asynchronous (near-real-time or batch) message or file-based communication • •

High number of connections to disparate applications • •

BPM across multiple systems or applications •

Bidirectional flow of data between applications • •

Message-based integration with guaranteed message delivery •

Additional instances for load balancing •

Redundant configurations for automated failover •

Medium-to-high complexity of data transformations • •

Processes driven through batch processing and scheduling •

Desire to leverage Web technologies and the Internet •

Capabilities need to be implemented in a proprietary manner (security, transactions, workflow, and reliable messaging) • •



Connectivity from disparate database components to a single database architecture •

Need to perform aggregate operations against batch data •

Accessible from external applications or systems (possibly running on different platforms) •

Need for a metadata repository •

Effort estimation and staffing plan

Project estimation tool

One can leverage the Project Estimator and Planning Suite (PEPS) to help arrive at estimates for the interface objects using PI. The same tool can be used for estimating the entire scope of Reports, Interfaces, Conversions, Enhancements, Forms, and Workflows (RICEFW) objects, not just interfaces. The attached Excel sheet can be used as a starter to feed into PEPS; it includes end-to-end estimations of interfaces. The tool gives detailed as well as summary estimations across all phases of the project.

It also helps arrive at a resource breakdown based on the estimated hours, and it details all the assumptions made while deriving the estimations. Complexity levels are also described so that objects can be categorized accordingly.

It is recommended to directly use the PEPS to help arrive at the estimates, given the scope of the objects.

Staffing/resource planning model

The estimates arrived at using the PEPS tool help drive the staffing plan. They help arrive at the minimum number of Advanced Business Application Programming (ABAP)/PI resources needed from a RICEFW perspective. However, some additional PI resources/roles are required irrespective of the scope of the PI objects; these are outlined in the staffing plan below.

A sample resource plan can look like the one below.

Note: The numbers included in this plan are for illustration purposes only. Every project will need a development/integration lead and Basis support for PI; however, the number and level of other resources, like programmers and analysts, and their commitment are best identified based on the scope of work and the estimations derived from the estimation tool.

Staffing plan

(Fill in grid to whatever level of detail is required, starting with most immediate time frame and moving toward most distant time frame.)

Project Name:

Project Manager:

Proposed Project Begin Date:

Proposed Project End Date:


SAP Project Initiation Estimation hours v11_LATEST.xlsx


Staffing manager:

Resource (personnel category) | Likely source level | Level of commitment (utilization rate) | Location | Number
Interface development lead/integration lead | Manager and above | Full time | On-site | 1
PI senior programmer | Senior consultant | Full time | On-site | 1
PI programmer | Consultant | Full time | 1 on-site/2 offshore | 3
BASIS support | Senior consultant | Full time | On-site | 1
Quality assurance | Consultant | Part time | Offshore | 1
Business analyst | Business Technology Analyst (BTA) | Part time | Offshore | 2


Envisioning

This section will help envision the future state of the architecture and landscape. Once the middleware tool of choice is picked, this section will help formulate the strategy for the integration implementation and plan the transformation.

After analyzing the current landscape, the scope of improvement in the business process, and the scope of improvement in the IT landscape, the best possible enterprise landscape is envisioned. This step should be executed considering basic principles such as:

Governance

Security

Scalability

Performance

Standards

Interface/scenario-related decision factors

New PI 7.3 feature and function usage

What are your business reasons for looking into PI 7.3? Does your business require the new SOA-related features, such as ESR with enhanced modeling capabilities, the Services Registry, and Web Service standards? Or do you want to go to PI 7.3 because your XI 3.0 system is going to run out of SAP-standard maintenance sooner or later and you would like to invest in the latest product version with enhanced high-volume support for EO/EOIO messages?

As you can see from the above diagram, the main recommendation is to check out the new features and functions based on a new PI 7.3 installation, instead of doing it all in one step by upgrading the existing system at the same time to get new features and functions up and running.


Number of connected business systems

How many business systems are currently connected to your productive XI/PI system? How many types of business systems are connected, that is, how many different technologies can be distinguished on the business systems side?

In the graphic above, we recommend that you go for an upgrade if you have many business systems. The question remains: what does "many" actually mean? This rating has to be determined on a customer-specific basis, unfortunately. For example, a customer may have 20 plant systems that are all connected by the same interface scenario and the same adapter type. In this case, transferring such identical systems might be easier than moving 20 completely individual systems with individual interfaces and many types of adapters.

Our rule of thumb would be that fewer than 25 different business system types could still be managed with a new installation and phaseout in a well-organized project. More than 25 different business system types might not be possible to handle with separated milestones; here a "big bang" approach for the whole company would be the better decision. Again, this value of 25 different business system types is not a fixed number; it is simply a recommended value. You should decide in your own project whether the effort of manually redirecting systems from a running XI/PI system to a new PI 7.3 system can be achieved in your environment and how you would rate this effort.

Number of live interfaces

Nearly the same arguments provided under 4.3.2 can be applied to this criterion. We already consider more than 100 productive interfaces to be many, and, based on this number, the rating would favor moving toward an upgrade rather than a new installation.

Type of high-volume interfaces


Message packing for EO/EOIO messages is activated by default in the new installation of PI 7.3. This functionality is already available in PI 7.0, and, therefore, the upgrade should be easy as the functionality itself does not require any interface design changes.

There are two exceptions that might favor choosing a new installation instead. The first is if you are planning to use the new ccBPM packaging functionality. Message packaging in the Business Process Engine (BPE) helps improve performance by delivering multiple messages to BPE process instances in one transaction. This can lead to increased message throughput. In some cases, it can increase the runtime for specific messages, for example, the time a message must wait while an appropriate message package is created. To take advantage of message packaging for (ccBPM) integration processes, you must activate it globally for the BPE and individually for the process types for which message packaging could be useful.

Since this configuration will affect the runtime of all your current ccBPM processes, it is recommended that you choose a new installation where you can perform this configuration independently of your current landscape if you have many ccBPM integration processes.

The second exception is if you are planning on moving scenarios to the AEX that you currently have running on the AAE. This will require a new installation of the AEX (as stated earlier, the AAE cannot be upgraded to an AEX) and a change to the configuration of the AAE scenarios. For this reason, it is recommended that you first get more familiar with PI 7.3 by using a new installation as opposed to an upgrade.

Long-running ccBPM processes

This criterion follows the same path as the auditing criteria for the IT/operations-relevant decision factors. If, for example, you have long-running ccBPM processes that collect daily messages for a whole month and start the real business-relevant processing at the end of the month, we consider these to be long-running scenarios. To keep them running and open, an upgrade has to be the first choice; otherwise, you have to define and describe how these scenarios will finally be transferred from the old XI/PI system to the new PI 7.3 system. If you only have processes that run for less than 24 hours, we do not consider them to be long running.


Implementation

Design — Best practices (Interface Strategy, Error Handling Strategy…, etc.)

Purpose

This document aims at explaining the integration options and strategy using SAP PI. Several business scenarios are explained, and the best integration approaches are suggested for the given scenarios. It can be used as a reference for any PI implementation project and can act as a base for an integration strategy document for our clients.

Audience

This document can be used by Project Managers and interface architects.

Assumptions and risks

This document has been created based on the latest version of PI, i.e., PI 7.3; newer releases may include changes that are not covered here. Performance, version upgrades, and changes in the product architecture are possible risks, but SAP has always been diligent in addressing these kinds of issues.

Scope

PI is normally used in landscapes having at least one SAP system, like SAP ECC. PI can seamlessly integrate two SAP systems, an SAP system and a third-party system, or two third-party systems, and can be used for synchronous as well as asynchronous communication. This document describes the various integration patterns and integration options provided by SAP PI, along with integration guidelines for various integration scenarios.

Executive summary

One of the primary benefits of a packaged product, like SAP, is to provide an out-of-the-box solution for executing various business processes. However, it is practically impossible for one application to provide all the functionality required, so it is important to choose best-of-breed applications to execute these complex processes. This raises the need to integrate these best-of-breed applications through a common integration platform.

To that point, this document aims at providing a strategy for development of interfaces between SAP ECC, CRM, and third-party applications using SAP PI as the middleware tool. The strategy presented in this document combines industry best practices. The architecture and methodologies presented are designed to minimize both short-term development costs and long-term maintenance costs simultaneously.

Integration platform

PI is a product offering from SAP for cross-system application integration involving both SAP and non-SAP systems. It also acts as a SOA middleware, capable of integrating disparate systems using services. Built on the SAP NetWeaver platform, it uses a combination of both ABAP and Java stacks to provide a reliable, high-performance, standards-based, and service-oriented platform for integrating SAP with non-SAP solutions.

EAI Playbook 29

Page 35: 212458274 SAP Integration Playbook V1

Continues Evolution

Integration patterns and key considerations

Integration scenarios and key considerations:

Synchronous
Inbound to SAP: performance, high availability, reprocessing, logging
Outbound from SAP: performance, high availability, reprocessing, change in schema

Asynchronous
Inbound/outbound to/from SAP: reprocessing, guaranteed delivery
B2B: reprocessing, guaranteed delivery, use of third-party adapters, mapping complexity
Inbound batch to SAP: performance, message split/bundling based on payload size, reprocessing
Outbound batch from SAP: performance, message split/bundling based on payload size

Integration strategy for SAP systems

SAP PI is a very mature and robust integration platform, integrating SAP and non-SAP systems using various message and transport protocols. Based on quality of service, interfaces can be broadly divided into synchronous and asynchronous ones. PI provides a set of adapters that help integrate systems in various message formats based on the business requirement.

Synchronous calls from an SAP system to the outside world can be established using Business Application Programming Interface (BAPI)/RFC, Proxy, Plain HTTP, and Web services. Similarly, asynchronous calls are made using File, Proxy, IDoc, JMS, and JDBC.

Asynchronous interfaces

When two systems interact with each other and do not require a quick response, this is referred to as asynchronous communication. IDoc, JDBC, JMS, Proxy, and File-based approaches are commonly used. Persistence, error handling and reprocessing, performance, etc., are discussed below to decide the best integration approach for asynchronous interfaces.

File-based integration

File-based interfaces are tightly coupled with the application: file formats are fixed, and the reading application must be able to read the file. Some SAP-standard transactions/programs create a file or expect a file as input. In these cases, we need to create the file on the SAP application server for inbound interfaces and pick files from the application server for outbound interfaces. This can be a solution for standard SAP interfaces, but this integration approach may not be preferred for custom developments.

IDoc-based integration

This is a very standard approach to post SAP-standard documents and to send messages to third-party systems. The Application Link Enabling (ALE) layer provides a standard error-handling, persistence, and reprocessing mechanism for IDocs. Standard IDocs can be used directly in most interfaces. If an existing IDoc does not meet the business requirement, it can be extended to add extra fields. Custom IDocs can also be created, if required. All IDocs are persisted in the system and can be reprocessed, if required.

ABAP proxies

This is an outside-in development approach with adapter-less, out-of-the-box integration. The proxy framework acts as a layer over the actual functionality and hides technical details from the application developer. It transforms language-specific data structures into XML and vice versa, and ensures technical connectivity with the integration engine as well as guaranteed delivery. The proxy model is an important element of the Enterprise Services Architecture.

SAP's latest integration approach, using XI SOAP (an extended XML document format), is a very common integration approach for custom developments. Proxies can be synchronous or asynchronous. Asynchronous messages are persisted in SAP by default, and an option to persist synchronous messages is available. Failed asynchronous proxy messages can be reprocessed in the target SAP system for inbound interfaces. If the middleware is not available and outbound messages fail, they can be resent from SAP.

In the case of inbound interfaces, custom code can be written to call a BAPI to post standard documents like a Sales Order or Delivery Document. Multiple documents can also be posted, if required.

For custom asynchronous scenarios, we can go for IDoc or Proxy approach. IDoc and Proxy approach details are explained below.
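The conversion the proxy framework performs, from a language-specific structure to XML and back, can be sketched as follows (the message type and field names are hypothetical, and a Python dataclass stands in for an ABAP structure):

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class OrderMessage:
    """A language-specific structure the proxy layer would serialize."""
    order_id: str
    material: str
    quantity: int

def to_xml(msg):
    # Serialize each field as a child element named after the field.
    root = ET.Element(type(msg).__name__)
    for f in fields(msg):
        ET.SubElement(root, f.name).text = str(getattr(msg, f.name))
    return ET.tostring(root, encoding="unicode")

def from_xml(xml_text):
    # Rebuild the typed structure from the XML on the receiving side.
    root = ET.fromstring(xml_text)
    return OrderMessage(order_id=root.findtext("order_id"),
                        material=root.findtext("material"),
                        quantity=int(root.findtext("quantity")))

wire = to_xml(OrderMessage("4711", "M-100", 2))
roundtrip = from_xml(wire)
```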

JDBC

SAP PI can help integrate SAP with a variety of databases, like Oracle and DB2. This requires the database-specific JDBC driver.

JMS

JMS adapters can be used to integrate SAP with queue-based tools, enabling JMS-based communication.

IDoc vs. proxy comparison:

Middleware dependency: Proxy-based interfaces can be developed using SAP PI, whereas IDocs can be used with other middleware (with an SAP adapter).

Performance in ECC (message processing): Posting data in ECC using a proxy gives better performance.

Performance in ECC (message transmission): We can create separate IDocs in PI and still bundle them in PI while sending them to ECC, limiting the number of tRFC calls to ECC. Proxy and IDoc messages can also be bundled together for better performance in PI.

Performance in PI/middleware: Message processing is easier in PI when we use proxies, as the "XML to IDoc XML" and "IDoc XML to IDoc" transformations are not involved.

Message tracking at transaction level: If we create multiple IDocs, one for each transaction, it is easier to pinpoint the failed IDoc and the data inside. If we send everything to ECC using a proxy and create custom IDocs for failed messages (using BAPI/ALE), we need to correlate the proxy message and the IDoc in ECC.

Message reprocessing: Failed IDocs and proxy messages can both be reprocessed in ECC.

Maintenance: Easier when we use IDocs, since we avoid creating a proxy message and then a Z-IDoc (using BAPI/ALE or some other method) for error handling.

Persistence considerations: If we create 1,000 IDocs from a file containing details of 1,000 documents, we may need more disk space. Sending them in a single proxy call and creating IDocs only for failed transactions can be considered.

Message persistence: IDocs as well as proxy messages are persisted in ECC.
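The bundling idea from the comparison above, packing several IDoc-style documents into one tRFC call, can be sketched as follows (the packet size and document numbers are illustrative, not PI configuration values):

```python
def build_idoc_packets(idocs, packet_size):
    """Group individual IDoc-style messages into packets so that one
    tRFC call to ECC carries several documents instead of one each."""
    return [idocs[i:i + packet_size] for i in range(0, len(idocs), packet_size)]

idocs = [{"docnum": n} for n in range(1, 8)]   # seven documents
packets = build_idoc_packets(idocs, packet_size=3)
trfc_calls = len(packets)                      # 3 calls instead of 7
```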

Recommended approach for asynchronous inbound interfaces

1. SAP-standard programs expecting file input — In cases where an SAP-standard program expects file input, the file-based approach should definitely be preferred.

2. SAP-standard document posting (e.g., Sales Order and Delivery) — SAP provides standard IDocs for master as well as transactional data. So if, for example, we receive a message to create a material in the system or to create a sales order, standard IDocs should be used; they can be extended to add extra fields, if required.

3. Single custom transaction in SAP per source message — In cases where we need to execute custom logic and post a document in SAP or create an entry in a table, proxy should be preferred over other mechanisms. Asynchronous proxy messages are persisted in ECC as well as PI and can be reprocessed in case of errors.

4. Multiple custom transactions in SAP per source message — Custom IDocs should be created to post the data in SAP. The ALE layer takes care of persistence and reprocessing. We can set a custom IDoc status after each logical step, and messages can be reprocessed if needed; by using the IDoc status in our posting logic, we can execute only the relevant steps while reprocessing.

5. Sender system is EDI based — Most EDI-standard documents can be readily mapped to IDocs using translation tools or adapters available in the market. Hence, IDocs should be used to post data in SAP for EDI interfaces. We might need to involve third-party EDI providers, since PI cannot handle EDI maps and transmission.

6. Very large source payload, creating thousands of documents in SAP per source file (a rare scenario) — Very large payloads should be sent to SAP using proxy and posted in a loop. PI provides high-volume support for proxy messages using the SOAP adapter, bypassing the integration engine. In case of errors, IDocs should be created for the failed transactions using BAPI-ALE in SAP to keep track of the failed messages and allow reprocessing, where possible. We can also persist them in custom tables if the message format is simple.

Recommended approach for asynchronous outbound interfaces

1. SAP-standard programs creating files — Many FI transactions create files in SAP. In cases where we need to send them to a partner or system, a file-based interface should be preferred. So, in all such cases where files are generated by standard programs, we should use a file-based interface instead of translating the message to a different format.

2. SAP-standard document posting (e.g., Sales Order and Delivery) — For most transactions, like Purchase Order creation and Sales Order creation, IDocs are created without any custom development. These can be sent to vendors, customers, banks, or any other system in IDoc format. IDocs are widely used, and many translators are available in the market to translate them from IDoc to a native message format. Using PI, we can translate an IDoc to SOAP, File, or any other industry-standard format. This may involve the use of third-party adapters in some cases.

3. Messages sent from the SAP system in custom format — Custom interfaces can be built using the proxy approach. Any custom message format can be created, and any custom output can be sent out using a proxy. This helps in creating a simple and robust solution in very little time.

4. Target system is EDI based — IDocs should be used to send the data out of ECC. We might need to involve third-party EDI providers, since PI cannot handle EDI maps and transmission.

Synchronous interfaces

Synchronous interfaces with the ECC system can be developed using BAPI/RFC, Web services, and Proxy. Platform independence, open standards, and persistence are discussed to decide the best integration approach for synchronous interfaces.

BAPI/RFC

A BAPI- or RFC-enabled function module can be invoked from external systems or middleware with an SAP adapter. These calls are not persisted by default. In case we need to persist these messages, we should use BAPI-ALE to create IDocs for failed messages, or some other logging mechanism.

PI or any other middleware with SAP adapter can communicate with ECC synchronously my making RFC calls.

Web service

BAPI/RFC is not based on open standards; hence, SAP has provided the flexibility to expose function modules as Web services. These services can either be predelivered content from SAP or custom-created; BAPI/RFC-enabled function modules can be exposed as Web services. Request/response payloads are not persisted in SAP by default. Middleware may not be needed to invoke a Web service, since the SOAP runtime resides on the ABAP stack of SAP systems and can be invoked directly. If routed directly (bypassing the middleware), there can be a significant performance gain.

Synchronous proxies

These are an SAP-centric solution that is easy to create and maintain using PI, and their messages can be persisted in ECC. For business scenarios where we need to send a message and expect a real-time response, synchronous proxies should be preferred.

Recommended approach for synchronous inbound interfaces

1. Web service is available — If predelivered integration content is available, it can be downloaded from the SAP marketplace and used directly, or enhanced if needed.

2. BAPI/RFC available to be exposed as a Web service — If a BAPI or an RFC-enabled function module is available, it should be exposed as a Web service. Web services afford platform independence.

3. SAP-delivered proxy classes (e.g., FSCM) — For SCM or FSCM integration, SAP provides predelivered content as proxies that can be used readily.

4. Custom synchronous requirement — For custom synchronous interfaces, a proxy is the preferred solution. Proxy messages can be persisted in SAP, if needed.


Recommended approach for synchronous outbound interfaces

1. Web service provided by the target system — The WSDL can be imported into PI so that the service can be called directly from PI. A proxy can be generated from the same interface definition and used between ECC and PI.

2. Custom synchronous requirement with persistence in SAP — A proxy should be used, since persistence of synchronous proxy messages can easily be enabled in SAP systems.

Design — Error handling strategy

Purpose

Every interface design must include mechanisms to identify and correct the errors that could occur at runtime. Instead of each interface having its own individual design for handling such error conditions, a common strategy used by all the interfaces is used. This common design for handling errors encountered during interface execution is called the Error Handling Framework.

Scope

This document only covers the handling of errors that occur in interfaces in which the PI System is involved. This document does not apply to errors that occur in interfaces that do not make use of the PI Server.

Executive summary

Every interface design must include mechanisms to identify and act upon error conditions that could occur at runtime. Instead of each interface having its own individual design for handling such error conditions, a common strategy used by all the interfaces is used. This common design for handling errors encountered during interface execution is called the Error Handling Framework.

This strategy involves using out-of-the-box SAP error handling features such as alerts, workflow email notifications, and CCMS. Custom components need to be built for any error handling functionality that is not covered by the standard.

The Error Handling Framework for the interfaces consists of two parts:

Error notification

System errors in SAP PI would automatically trigger the CCMS alerts, which can be monitored through the SAP solution manager.

Custom alerts from SAP would be raised to notify the error in the interface in the following scenarios:

1. Any custom validation/lookup error in SAP PI

2. Mapping errors due to missing values for mandatory fields in SAP PI

3. Any functional error occurring while data processing in a target SAP system

Specific basis configuration would be required to integrate the custom SAP PI alerts with CCMS and solution manager.


Message reprocessing

Although the best approach for resolving an error caused by incorrect data is to resend the corrected data from the source system, the error handling framework additionally provides the option of reprocessing the erroneous message. The erroneous message can be reprocessed by the following mechanisms:

1. Reprocessing the message in SAP PI by using the standard SAP PI tools — SAP PI stores all the messages being passed through it. Standard tools are available in SAP PI to resend the message to the target system, in case of an error in SAP PI.

2. Reprocessing the erroneous data in SAP ECC — The message from SAP PI to SAP ECC is passed either in the form of an IDoc or a proxy. In case of an error in an IDoc, the data can be reprocessed using the standard transactions available for reprocessing IDocs. If the data integration is done using a proxy, custom mechanisms have to be used to reprocess the errors.

Definitions

Term: Explanation

Functional Errors: Errors that occur as a result of incorrect business data or configuration

Technical Errors: Errors that occur as a result of a technical capability not operating as anticipated

Trigger: An event that kicks off a business process in the SAP/Legacy system

Connectivity: Moving data from the source system to PI, or from PI to the target system

Process: Logic used within the SAP/Legacy system to process the data

Transformation: Mapping data from source to destination formats

POF: Point of Failure

Error classification

Errors can be classified based on the combination of the POF (Trigger, Connectivity, Transformation, or Process) and the error type (Functional or Technical). The table below shows the various classifications based on these parameters.

1. Trigger (Functional): Failure in an outbound proxy/function module/report program while collecting back-end information

2, 4. Connectivity (Technical): Failure in the connectivity between PI and the SAP/Legacy system

3. Transformation (Functional): Failure transforming data from source to target format in PI

5. Process (Functional): Business validation errors once data is loaded into SAP via an IDoc or proxy, and the same on the Legacy side

An error handling strategy should contain mechanisms for notification of errors and, where required, mechanisms to reprocess them. The following sections describe both mechanisms with respect to the various error classes:


It should be noted that SAP provides many mechanisms to support error handling. Custom components need to be built only for error handling functionality that is not covered by the standard.

POF trigger

The error type here would be functional.

These errors would occur while data is being collected for sending to a middleware. Typical points of occurrence would be:

A class, report, or function module calling an outbound proxy

A function module behind the creation of an outbound IDoc

A standard SAP program generating a file

These issues would be related to the business processing of information on the sender side. The following sections detail the various mechanisms for error notification and error reprocessing:

Error notification

Examples of a few reasons for these errors on the outbound side would be:

Invalid input to an outbound program, e.g., an invalid material, plant, or customer

Invalid correlations, e.g., invalid material for a plant, etc.

Out-of-the-box features can be used for error notification in such cases; for example, ABAP messages can be displayed in both online and batch modes.

Error reprocessing

Error reprocessing will be done manually, due to data input issues.

POF connectivity

The error type here would be technical.

These errors occur during the sending of information to the middleware. These can occur as follows:

At the source SAP system, as a result of an incorrect proxy configuration for proxy-based communication.

At the source SAP system, as a result of an incorrect partner profile for IDoc-based communication.

At the middleware (e.g., SAP PI), as a result of adapter-specific errors.

Error notification

For the adapter-specific errors, which occur at the middleware, out-of-the-box features like alert category definition, agent assignment, and alert rules configuration (via the RWB) need to be used.

In case of outbound IDocs, we would have the following error use cases:

The partner profile is incorrect, and hence the IDoc could not be posted. In such a case, the standard task TS00007989 is triggered. The notification goes to the IDoc administrator maintained via transaction OYEA.


An outbound IDoc goes into a syntax error. In such a case, the standard task TS00008070 is triggered. The agent to receive the notification can be maintained either in the specific partner profile or in the general view specific to the partner number. If these agents are not maintained, the error is propagated to the IDoc administrator maintained via OYEA.

Error reprocessing

Error reprocessing would be required in case of failed outbound IDocs.

SAP provides out-of-the-box features for reprocessing failed outbound IDocs via transaction BD87.

In case of proxy-based communication, error reprocessing can be done via transaction SXMB_MONI.

For legacy-side connectivity, all PI adapters have their own retry mechanism: SAP PI attempts to resend a message that failed due to connectivity or adapter-related issues a preconfigured number of times.

Once all such attempts fail, the message is set to an error state.

POF transformation

The error type here would be functional.

These errors occur during the mapping or transformation step within PI, as a consequence of business validation checks performed in PI or of incorrect data formats.

Error notification

Some examples of such errors can be:

Errors in data formats occurring due to missing values for mandatory fields, incorrect field lengths, etc.

Errors that occur due to specific business rules, which are applied on the incoming data.

Out-of-the-box and custom features would be used for error notification.

Out-of-the-box features

SAP provides an out-of-the-box error notification mechanism called alerts. The steps to use it include the definition of alert categories, the assignment of agent responsibilities, and the definition of alert rules.

The first step to use the feature would be to configure alert categories. The category is used to classify different kinds of alerts. This is done via the transaction ALRTCATDEF.

The definition involves configuring the recipients or the responsible agents who should receive a notification based on the alerts.

It also involves the definition of container variables. These variables are used to construct the alert message. A standard set of such containers is provided by SAP:

Container element  Data type  Description

SXMS_MSG_GUID  SXMSMGUIDC  Message ID
SXMS_RULE_NAME  SXMSDESCR  Description of the alert rule
SXMS_ERROR_CAT  SXMSERRCAT  Error category
SXMS_ERROR_CODE  CHAR70  Error code
SXMS_FROM_PARTY  SXI_FROM_PARTY  Sender party
SXMS_FROM_SERVICE  SXI_FROM_SERVICE  Sender service
SXMS_FROM_NAMESPACE  SXI_FROM_ACTION_NS  Sender namespace
SXMS_FROM_INTERFACE  SXI_FROM_ACTION  Sender interface
SXMS_TO_PARTY  SXI_TO_PARTY  Receiver party
SXMS_TO_SERVICE  SXI_TO_SERVICE  Receiver service
SXMS_TO_NAMESPACE  SXI_TO_ACTION_NS  Receiver namespace
SXMS_TO_INTERFACE  SXI_TO_ACTION  Receiver interface

These are configured in the alert category definitions as follows:

The containers are used to construct the alert message. To be able to relay the alert messages to a centralized alert monitoring tool like Remedy, some containers, like SXMS_MSG_GUID, SXMS_FROM_INTERFACE, and SXMS_TO_INTERFACE, must be used.

The final step would be to define alert rules for the above alert category in the RWB under Alert configuration.

When an error occurs, the alert rule determines the corresponding category and further the corresponding recipient resulting in an alert message in the alert inbox.


More details are available in SAP Help.

These errors can be relayed onto CCMS and then seamlessly to the solution manager via the following steps:

Scheduling a job via the RWB to transfer alerts from the alert inbox to CCMS

Setting up a CCMS template for PI errors using the transaction RZ20 in PI


The alerts can then be viewed via transaction RZ20; the description is similar to that of the alert category.

Custom alerts

Custom features have to be implemented to support rule-based business validation errors from PI.

All such alerts can be raised from a consolidated system, which has two major advantages: proper governance of alert categories, and a generic framework available to support such errors.

A generic framework would include a wrapper class ZCL_CUSTOM_ALERT_HANDLER with a public method RAISE_ALERT implemented over the standard alert function module SALRT_CREATE_API.
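Such a wrapper might be sketched as below. This is a minimal illustration under stated assumptions, not the productive implementation: the configuration table ZALERT_CFG and its fields are hypothetical names, the container handling for the message variables is omitted, and the parameter list of SALRT_CREATE_API should be verified against the function module in the system.

```abap
CLASS zcl_custom_alert_handler DEFINITION PUBLIC.
  PUBLIC SECTION.
    CLASS-METHODS raise_alert
      IMPORTING p_interfaceid      TYPE string
                id                 TYPE symsgid     OPTIONAL
                lang               TYPE langu       OPTIONAL
                v1                 TYPE symsgv      OPTIONAL
                v2                 TYPE symsgv      OPTIONAL
                v3                 TYPE symsgv      OPTIONAL
                v4                 TYPE symsgv      OPTIONAL
                message_string_tab TYPE string_table OPTIONAL.
ENDCLASS.

CLASS zcl_custom_alert_handler IMPLEMENTATION.
  METHOD raise_alert.
    DATA lv_category TYPE salrtdcat.

    " Map the interface ID to its alert category via a configuration
    " table (ZALERT_CFG is an assumed name for illustration).
    SELECT SINGLE alert_category FROM zalert_cfg
      INTO lv_category
      WHERE interface_id = p_interfaceid.
    IF sy-subrc <> 0.
      RETURN. " no alert category configured for this interface
    ENDIF.

    " Raise the alert through the standard API. Filling the alert
    " container with ID/LANG/V1..V4 and the message string table is
    " omitted; check the exact signature of SALRT_CREATE_API.
    CALL FUNCTION 'SALRT_CREATE_API'
      EXPORTING
        ip_category = lv_category
      EXCEPTIONS
        OTHERS      = 1.
  ENDMETHOD.
ENDCLASS.
```

The single SELECT against a configuration table keeps the alert-category governance in one place, which is the main design point of the consolidated framework.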


The following shows the method signature:

Parameters Description

P_INTERFACEID This is the Interface ID, which uniquely identifies the interface. This correlates to the alert category.

ID This is an optional importing field. Message ID corresponding to error message should be passed, if needed.

LANG This is an importing field for the error message language. Populate it if the ID field is populated.

V1 to V4 These are importing fields for the SAP error message variables. Populate them, as needed, if the ID field is populated.

MESSAGE_STRING_TAB This is an optional importing field; populate it with the error message, if needed. The messages in the container should be identical to the messages to be displayed in a centralized error monitoring tool like Remedy.

A precondition for using the framework is to define alert categories as specified in Section 8.1.1; in addition, a generic alert category needs to be defined.


It should have a generic message string table along with a placeholder for the title of the alert message. This is done to accommodate any error message text, which needs to be displayed.

The alert category defined here needs to be mapped to the interface ID. This is done by means of a simple configuration table.

To facilitate a call from PI in the form of an RFC lookup, a wrapper RFC is coded over the method RAISE_ALERT.


The parameters are detailed below:

INTERFACEID Unique Identifier of an Interface

MESSAGETYPE Type of message (Error, Information)

MESSAGETEXT Message Text as STRING

MESSAGEID Message ID

ALERTTITLE Title of the Alert Message

SYSTEMOFORIGIN Source system where the message is generated
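The RFC wrapper could be outlined as follows. This is a hedged sketch, not the productive code: the typing of the parameters is assumed, and the wrapper simply delegates to the class method so that PI (via RFC lookup) and local ABAP callers share one alerting path.

```abap
FUNCTION zpi_raise_alerts.
*"  IMPORTING (signature as maintained in SE37; types are indicative)
*"     VALUE(interfaceid)    TYPE string
*"     VALUE(messagetype)    TYPE char1    " E = Error, I = Information
*"     VALUE(messagetext)    TYPE string
*"     VALUE(messageid)      TYPE symsgid OPTIONAL
*"     VALUE(alerttitle)     TYPE string
*"     VALUE(systemoforigin) TYPE string

  " Delegate to the common wrapper class. MESSAGETEXT, ALERTTITLE, and
  " SYSTEMOFORIGIN would be passed through the alert container; that
  " plumbing is omitted in this sketch.
  zcl_custom_alert_handler=>raise_alert(
      p_interfaceid = interfaceid
      id            = messageid ).
ENDFUNCTION.
```

Keeping the RFC as a thin shim means the alert-category lookup and container logic live in exactly one place.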

The RFC can be called from the PI map in the following ways:

Via the RFC lookup standard function in the message map (graphical mapping).

Via a custom Java wrapper API for XSLT- and Java-based transformations.

The Java wrapper API has a method EXECUTE with the following parameters:

HashMap MessageParam This is a collection of all the above-mentioned parameters of the RFC ZPI_RAISE_ALERTS.

String PIService Service for RFC Lookup Channel.

String PIRFCChannelName Channel for RFC Lookup.

AbstractTrace trace Trace object.

The calling application can terminate the mapping after calling the RFC, by failing the generation of the target nodes.

The RFC channel name for the lookup can be passed to the map by means of parameterized mapping technique.

Error reprocessing

Error reprocessing here is not in scope and should not normally be allowed, as the errors in this step arise from errors in the incoming data.

A rerun of the interface from the source should be the preferred reprocessing option.

Error reprocessing can technically be done from PI itself, using transactions like SXMB_MONI or the Message Monitor, but this should be avoided and done only in exceptional cases with approvals.

POF process

The error type here would be functional.

These errors would occur while the data is being processed for posting into the target SAP system. These errors are mainly business processing errors and are due to incorrect data, missing configuration, etc.

As a precondition, the functional specification needs to clearly define what constitutes a transaction: a complete input file/message, a single record in the input file/message, or a group of records (based on some grouping criterion). By default, a record is assumed to constitute a transaction, unless otherwise specified.

Data can be posted into SAP by means of IDocs, inbound proxies, or files.


The following sections define the error notifications and reprocessing options based on the above three use cases:

Use case A: Inbound IDocs

Error notification

A few examples of such errors can be:

Incorrect records in the incoming data.

Missing configuration in the incoming data.

A custom framework needs to be built with pluggable components for error notification.

As discussed in Section 8.1.2, the wrapper class ZCL_CUSTOM_ALERT_HANDLER is extended with a public method HANDLE_IDOC_ALERTS.

The method has just one input parameter DOCNUM.

The method is based on a simple configuration table. This table determines the corresponding Interface ID for the IDoc message based on the IDoc control record.


The following is the structure of such a configuration table:

After the determination of the interface ID, the method HANDLE_IDOC_ALERTS calls the method RAISE_ALERT with the parameters described in Section 8.1.2.

The alert category definition is a precondition for this mechanism, along with the configurations as suggested in Section 8.1.2.

The call to HANDLE_IDOC_ALERTS is made from a workflow task. The sequence of events is shown in the diagram below.

SAP triggers the event INPUTERROROCCURRED of the object type IDOCAPPL in case of an IDoc error. A method is called within a workflow task configured for this error event; the workflow task passes the IDoc number to the method.
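The lookup performed by HANDLE_IDOC_ALERTS could be sketched as follows. This is an illustration only: the configuration table ZIDOC_ALERT_CFG is an assumed name, and the control-record fields used for the lookup (message type, sender partner) may differ per project.

```abap
METHOD handle_idoc_alerts.
  " DOCNUM is the only input parameter: the number of the failed IDoc.
  DATA: ls_edidc       TYPE edidc,
        lv_interfaceid TYPE string.

  " Read the IDoc control record for the failed IDoc.
  SELECT SINGLE * FROM edidc INTO ls_edidc
    WHERE docnum = docnum.
  IF sy-subrc <> 0.
    RETURN. " IDoc not found; nothing to alert on
  ENDIF.

  " Determine the interface ID from the configuration table, keyed on
  " control-record fields (ZIDOC_ALERT_CFG is an assumed name).
  SELECT SINGLE interface_id FROM zidoc_alert_cfg
    INTO lv_interfaceid
    WHERE mestyp = ls_edidc-mestyp
      AND sndprn = ls_edidc-sndprn.

  " Raise the alert via the common wrapper method.
  zcl_custom_alert_handler=>raise_alert( p_interfaceid = lv_interfaceid ).
ENDMETHOD.
```

Because the workflow task supplies only the IDoc number, everything else the alert needs is derived from the control record plus configuration, which keeps the workflow binding trivial.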

Error reprocessing

Reprocessing of IDoc errors can be done via transactions like BD87. Reprocessing should only happen in case of transient errors, e.g., errors due to locks or missing configuration on the target side.


Use case B: Inbound proxies

Error notification

A few examples of such errors can be:

Incorrect records in the incoming data.

Missing configuration in the incoming data.

The custom framework described in Section 8.1.2 can be used for error notifications. The method RAISE_ALERT has to be called from the inbound proxy code.

Error reprocessing

Successful transactions are posted, and custom IDocs are generated for failed transactions to allow future reprocessing of those failed records; one IDoc is generated per failed transaction.

Error reprocessing could be used in the following use cases:

In case of transient issues like locks.

In case of missing configuration.

In an exceptional case, when the source cannot resend the message.

It should be noted that error reprocessing should not be done in case of data-related issues, as this would not meet the auditing requirement for integration.

A custom feature needs to be implemented for error reprocessing, since proxy interfaces do not have any standard mechanism for capturing the data in case of an error. In such scenarios, the data can be captured in a custom IDoc.

The procedure for creating the custom IDocs in such scenarios is described below.

The first task is to identify the interface header details (receiver, sender, message ID, etc.) before starting to process the data. This is achieved by using a method of the class CL_PROXY_ACCESS.

Once the message header details are obtained, the corresponding IDoc type and message type to be used for this interface can be obtained from a control table, as shown below.

Then, using the function module SWO_GET_FUNCTION_FROM_METHOD and the table TBDBA, we can obtain the underlying BAPI function module used for posting the IDoc data. Using the function module RFC_GET_FUNCTION_INTERFACE_P, we can obtain the import, export, and table signatures of that BAPI function module. Lastly, using the function module IDOC_TYPE_COMPLETE_READ, we can obtain the segment information of the IDoc type being used.
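The lookup chain above can be sketched as follows. This is an outline only: the parameter names shown are indicative and must be verified against the actual function module signatures in the system before use.

```abap
DATA: lv_objtype  TYPE swo_objtyp,  " BAPI object type, read from TBDBA
      lv_method   TYPE swo_verb,    " BAPI method, read from TBDBA
      lv_function TYPE rs38l_fnam,
      lv_idoctype TYPE edi_idoctp.

" 1. Resolve the BAPI function module behind the ALE interface
"    (object type/method come from table TBDBA).
CALL FUNCTION 'SWO_GET_FUNCTION_FROM_METHOD'
  EXPORTING
    objtype  = lv_objtype
    method   = lv_method
  IMPORTING
    function = lv_function.

" 2. Read the import/export/table signature of that function module
"    (table parameter name is an assumption).
CALL FUNCTION 'RFC_GET_FUNCTION_INTERFACE_P'
  EXPORTING
    funcname = lv_function
  TABLES
    params   = lt_params.

" 3. Read the segment structure of the IDoc type for the interface
"    (parameter names are assumptions).
CALL FUNCTION 'IDOC_TYPE_COMPLETE_READ'
  EXPORTING
    pi_idoctyp  = lv_idoctype
  TABLES
    pt_segments = lt_segments.
```

With the BAPI signature and the IDoc segment list in hand, the failed proxy payload can be mapped field by field into a custom IDoc for later reprocessing.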


Forward error handling

Forward Error Handling (FEH) is a concept for monitoring and resolving errors in asynchronous message processing on the receiver's endpoint. SAP application systems use the Error and Conflict Handler (ECH) to implement FEH together with the Postprocessing Office (PPO).

An inbound proxy in the ECC system would normally throw an exception when something goes wrong. We change this behavior: whenever an exception would be thrown, we pass the message to the ECH service instead, so that it can be processed in the Postprocessing Office.

Configuration steps:

Activate FEH in the ECC client using the SPRO menu.

Create an ECH business process, assign it to your proxy class, and set up its persistence with CL_FEH_MESSAGE_PERSISTENCY.

Create a business process in the Postprocessing Office via the view /SAPPO/VS_BPROC. Then, assign the ECH business process to the PPO business process.

Assign the proxy method to the FEH process; this can be done using the view FEHV_PROXY2CMPR.

Implementation:

Add the interface IF_ECH_ACTION to the proxy class; once added, a few new methods appear in the class.

Add a static attribute GO_ECH_ACTION to the class.

Insert the code that calls ECH into the proxy's method; it is executed once the proxy's exception is thrown. At least the ls_bapiret structure needs to be populated in order to send an error description to the ECH.
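The handover to ECH inside the proxy method can be sketched as below. This is a hedged outline, not the full implementation: the proxy interface name and exception class are placeholders, and the exact parameters of the COLLECT call must be checked against CL_FEH_REGISTRATION in the system.

```abap
METHOD zii_my_proxy~execute_asynchronous. " placeholder proxy method name

  DATA ls_bapiret TYPE bapiret2.

  TRY.
      " ... normal inbound processing of the payload ...

    CATCH cx_my_processing_error. " placeholder exception class
      " Describe the error for the Postprocessing Office.
      ls_bapiret-type    = 'E'.
      ls_bapiret-message = 'Processing failed; see PPO for details'.

      " Lazily initialize the static ECH handle on the class.
      IF go_ech_action IS NOT BOUND.
        go_ech_action = cl_feh_registration=>s_initialize( ).
      ENDIF.

      " Hand the message over to ECH instead of raising the exception;
      " it then appears in /SAPPO/PPO2 for repeat/confirm/discard.
      " (The COLLECT parameter names here are assumptions.)
      go_ech_action->collect(
          i_single_bo = ls_bapiret ).
  ENDTRY.

ENDMETHOD.
```

The key design point is that the CATCH block no longer terminates the message with a system error; the message ends up in a workable PPO state instead.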

Once the message runs and the error occurs, you should see the message with a new status in the local PI monitor (transferred to external application) and also in the Postprocessing Office (transaction /SAPPO/PPO2). In PPO, once you select the message, you can see the details (such as all errors) and view the content of the message.


Once you open a message in the Postprocessing Office, there are a few possible ways to handle the error. You can either repeat (when the error should no longer occur), confirm (complete the message, e.g., when it was processed manually after an authorization issue), or discard (fail the message when processing should not be repeated at all).

Retry method:

Within this method, you can call the proxy's method one more time with the same data. A good approach is to copy the standard method into a new one with the same importing parameter and exception, but with an additional BAPIRET exporting parameter that indicates whether the retry was successful this time. If the call is again unsuccessful, invoke the exception and call the collect method.

Confirm method:

Within this method, you can do several things, such as sending an email to the responsible user asking them to reprocess the transaction manually; at the end, you need to call the finish method. When you confirm the message in the Postprocessing Office, the status is updated accordingly.

Fail method:

The last method should be used when nothing more can be done with the message and you want to finalize its status, so that it will no longer be reprocessed by anyone. You can also send emails and other alerts from it prior to calling the s_fail method. Using this method likewise updates the status of the message in ECH.

Using any of the buttons/methods described should update the global message status.

Use case C: Inbound files

Error notification

A few examples of such errors can be:

Incorrect records in the incoming data.

Missing configuration or master data in the incoming data.

Capture the error records in a separate file in an “error” directory, in the original source file format, and send an email notification to the respective support team.

Application logs:

A set of reusable custom function modules, packaged as includes, handles file-based interfaces. One of these function modules provides a means of logging the actions performed by file interface programs; its main purpose is to write messages to the interface application log. Application logs can be viewed using transaction SLG1.

Error reprocessing

For missing configuration or missing master data, the required configuration or data update is done in ECC, and the error file can then be reprocessed.


Design — Archiving strategy

Purpose

The purpose of this document is to define, for the SAP ERP PI Team, a message archiving process and deletion process.

A high volume of data is transferred through the PI system to other parties.

All incoming and outgoing messages are persisted in the PI database, where they are stored for error analysis.

To keep the PI database from growing too fast, it is advisable to archive and delete information that is no longer needed. Depending on the volume of messages processed by the PI system, archiving may not be needed immediately, but not archiving actively will reduce PI performance in the long run. Most customers do not retain all their archives; there are specific cases where some data is archived and stored.

SAP has implemented specific data management functions to prevent PI database overflow. These functions can be used in the following components:

PI engine

PI adapter framework

PI BPE

This document is expected to be actively maintained on an ongoing basis as and when new interfaces are developed that adopt architectural or infrastructural components (or variants thereof) not already contained in this document.

Scope

The scope of this document is to outline the process and steps involved around PI message archiving and deletion processes. This document can be extended to define the strategy of archival process.

Audience

This document is primarily intended for use by PI BASIS Team members and those people interested in knowing the message archiving process of PI.

PI engine archiving

Master table information

SXMSPMAST is the master table that holds the key information about each message going through the Integration Engine. Data is persisted in this table based on the retention periods defined for asynchronous and synchronous messages.

When a message enters the Integration Server (IS), the field ITFACTION of the SXMSPMAST table is set to INIT. The interface is then checked for the action to take: if it is listed as an interface for archiving, ITFACTION is changed to ARCH; otherwise, it is changed to DEL.

*Note: The ITFACTION field value in the SXMSPMAST table describes which action applies to the message, but does not mean that the action has already taken place. It is only status information about the intended action, not confirmation that the action is done.
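As an ad-hoc check of how many messages are still awaiting each action, the ITFACTION flag can be queried directly. This is an illustrative monitoring sketch, not part of the standard archiving setup; the local type and field typing are assumptions.

```abap
" Count Integration Engine messages per pending action
" (ARCH = to be archived, DEL = to be deleted, INIT = undecided).
TYPES: BEGIN OF ty_count,
         itfaction TYPE sxmspmast-itfaction,
         cnt       TYPE i,
       END OF ty_count.

DATA: lt_counts TYPE TABLE OF ty_count,
      ls_count  TYPE ty_count.

SELECT itfaction COUNT(*) AS cnt
  FROM sxmspmast
  INTO TABLE lt_counts
  GROUP BY itfaction.

LOOP AT lt_counts INTO ls_count.
  WRITE: / ls_count-itfaction, ls_count-cnt.
ENDLOOP.
```

A growing ARCH or DEL count over time is a quick signal that the archiving and deletion jobs described below are not keeping pace with message volume.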


Hence, the interfaces that should be archived must be entered in the archiving list before the archiving and deletion jobs are scheduled.

Procedure to schedule the jobs

1. Log on to the client of the IS and call transaction SM37 (Simple Job Selection).

2. Enter “*” in the Job name field, and enter the ABAP program names RSXMB_ARCHIVE_MESSAGES and RSXMB_DELETE_MESSAGES in the “Job step -> ABAP program name” field.

3. Choose Execute; the jobs that run the archiving/deletion programs are displayed. If a job has a status other than Finished, double-click its line for more information.

System maintenance action plan

ABAP program Frequency

RSXMB_ARCHIVE_MESSAGES Period of the archiving cycle

RSXMB_DELETE_MESSAGES Period of the deletion cycle

Defining interfaces and retention periods for archiving

We also specify how long XML messages are retained in the database before they are deleted or archived, and how long history entries for deleted XML messages are retained.

Before defining interfaces in the Integration Engine, we have to decide on the retention period; it varies from company to company.

To define retention periods for messages and history entries for messages in the database, proceed as follows:

On the Define Interfaces for Archiving screen, choose Retention Periods and enter, in the corresponding fields, the number of days that history entries marked for deletion, or XML messages marked for deletion or archiving, are to be retained in the database.

If processed synchronous XML messages without errors are to be deleted immediately, enter 0.

Save your changes.

These settings are made in the Integration Engine configuration: use transaction SXMB_ADM -> Specific Configuration.


Defining retention periods

The system navigates to a screen where the retention periods you specified are represented by the corresponding configuration data according to the following table:

Figure 4-5: Retention period

Below is an example of defining retention period.

Figure 4-6: Categories of retention period

XML messages that do not have the status “processed successfully” remain in the database.

Defining interfaces

This is where you define interfaces so that you can archive their XML messages.

In SXMB_ADM, the system differentiates between sender and receiver interfaces at this point.

Figure 4-7: Defining interface for archiving

To include an interface in the list of interfaces displayed, enter the interface in the Name and Namespace fields, and choose Flag Interface for Archiving.

When we choose Flag Interface for Archiving, the interface appears in the list.

Figure 4-8: Choosing the interface


Select the interface and choose Retention Period.

Figure 4-9: Defining retention period for an interface

We can specify the retention period for the Integration Engine.

Figure 4-10: Define retention period

The archiving configuration is now complete; next, we have to schedule the archiving job and then read, write, and monitor the archived messages.

Setting up the physical path for archived messages

This can be set up through SPRO.

Figure 4-7: Physical path

Schedule archiving

To schedule archiving, follow this path:

SXMB_ADM -> Schedule Archiving -> Integration Engine — Archiving.


Figure 4-8: Integration engine archiving

Start Date: The start time of the archiving run. Define the start time and maintain the parameters in the corresponding fields. If jobs are to be executed periodically, choose Period Values to enter the start values.

Figure 4-9: Integration engine archiving — Define

A dialog box like the one below is displayed, in which you maintain general parameters such as Output Device, Number of Copies, and Number of Pages. Choose Properties to maintain additional properties of the spool request; these properties generally do not change for each spool request. If no printer is configured, enter LP01 or LOCL. These output devices are delivered with the system and only create a spool request, which is deleted according to the Basis configuration.

Figure 4-10: Background print parameters

Schedule Archiving: Once scheduling and printer definition are done, choose Schedule Archiving or press F8.

Job Overview: To display the scheduled job in the job overview, choose Job Overview.


Archive Administration

Archive Administration is used for reading, writing, deleting, and managing archived messages. There are a few ways to reach it:

Transaction SXMB_ADM -> Schedule Archiving Job.

Transaction XMS_SARA goes there directly.

Transaction SXMB_ADMIN -> Schedule Archiving Job.

Reading of archived messages

From Archive Administration, choose Read and run the read program.

Figure 4-11: Archiving administration


When executed, a screen like the one below appears.

Figure 4-12: Select file to read — Archived

Expand one of the sessions and files above and press Enter. The messages that were archived and deleted on that particular date are displayed.

Figure 4-13: View message


To view the messages, select them and display them; the XML message versions are then shown.

Management

From Archive Administration, choose Management. Here we can see sessions and files, archiving objects, job details, the physical file name, the logical path and file name, the size, and the status.

Example of archived messages:

Figure 4-14: AL11 view of archived messages

Statistics

From Archive Administration, choose Statistics to display the space saved in MB.

Figure 4-15: Display statistics for archiving

The occupied database space can also be checked in transaction DB12.


History entries

History entries are checkpoints for observing message processing. They are written at the end of each processing step for a message and contain the current status and the execution date.

History entries remain in the database for an unspecified length of time and must be deleted at some stage so that the database table does not overflow. The deletion process only deletes history entries for messages that have already been archived or deleted. A history entry is kept in the database for at least seven days after its message has been deleted; this is necessary because history entries are also required for the quality of service Exactly Once. Deletion of these history entries is handled by the standard housekeeping job SAP_BC_XMB_HIST_DELETE_<client>.

This job can be scheduled via SXMB_ADM -> Schedule Deletion Jobs.

Figure 4-16: Schedule delete jobs

Figure 4-17: Delete jobs for XML messages and history entries


Archiving

Adapter framework

The adapter framework automatically deletes correctly processed XML messages. Asynchronous messages are persisted in the database and are automatically removed after their storage time has expired.

For monitoring purposes, synchronous messages are persisted in the message store for a specified time. Since these messages are held in memory, this value should not be set too high; a high load of synchronous messages can result in OutOfMemory errors.

The default settings are:

Successfully processed synchronous messages are retained for five minutes.

Successfully processed asynchronous messages are retained for 30 days by default.

The delete job is started with the message store and is triggered periodically, by default once a day (1,440 minutes).
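The defaults above can be summarized in a small sketch. The class and method names are hypothetical; the values are the 5-minute, 30-day, and 1,440-minute defaults quoted in the text:

```java
import java.time.Duration;

// Sketch of the Adapter Framework default retention behavior described
// above: successfully processed synchronous messages are kept for
// 5 minutes, asynchronous messages for 30 days; the delete job runs
// every 1,440 minutes (once a day) by default. Names are illustrative.
public class AdapterRetentionDefaults {

    static final Duration SYNC_RETENTION    = Duration.ofMinutes(5);
    static final Duration ASYNC_RETENTION   = Duration.ofDays(30);
    static final Duration DELETE_JOB_PERIOD = Duration.ofMinutes(1440);

    // A message's retention has expired once its age exceeds the limit
    // for its communication mode.
    static boolean expired(Duration age, boolean synchronous) {
        Duration limit = synchronous ? SYNC_RETENTION : ASYNC_RETENTION;
        return age.compareTo(limit) > 0;
    }

    public static void main(String[] args) {
        System.out.println(expired(Duration.ofMinutes(10), true));  // true: sync kept only 5 min
        System.out.println(expired(Duration.ofDays(10), false));    // false: async kept 30 days
    }
}
```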

There are two kinds of archiving adapter messages:

Signed messages

Unsigned/normal messages

A signed message contains additional information, such as nonrepudiation signatures from the parties; otherwise, signed and normal messages are identical.

Signed messages may need to be archived for legal reasons. Both types of messages reside in the same local database of the adapter framework, so a separate archiving process is needed for signed messages.

Procedure

Log on to the client of the IS and call transaction SXMB_IFR. This brings up a browser window with a link to the RWB.

Once logged in, go to ‘Component Monitoring,’ choose ‘All’ from the dropdown menu of the field “Component with Status,” and click the “Display” button.

Figure 5-11: Adapter archiving


In the list of components, select the radio button of the Adapter Engine <hostname>.

Figure 5-12: Background processing

In the toolbar below, click the Background Processing button (SP 14).

Here you will find a list of background jobs running in your system.

Check the status of the jobs and, if necessary, the logs (on the Log tab); in this context, jobs of type archiving and deletion are important.

Figure 5-13: Create adapter archiving

Repeat these checks for every noncentral Adapter Engine.


Remarks

Messages with S/MIME security settings can be archived via Runtime Workbench -> Component Monitoring -> Display All Components -> choose an Adapter Framework -> Security Archiving.

PI BPE archiving

Procedure:

Log on to the client of the IS and call transaction SE16.

Enter the table name SWWWIHEAD or SWW_CONT in the “Table Name” field.

Click the “Table Contents (F7)” button; the “Data Browser: Table <table name>: Selection Screen” page appears.

Click the “Number of Entries” button to display the number of entries. If this number is too large, you need to delete the completed work items.

In transaction SE38, you can execute the report RSWWWIDE_TOPLEVEL, which allows you to delete the completed BPE work items manually.

System maintenance action plan

Table       Frequency
SWWWIHEAD   Daily (if you have heavy BPE load)
SWW_CONT    Daily (if you have heavy BPE load)

Different archiving/delete-related reports at a glance

RSXMB_ARCHIVE_MESSAGES: archives XML messages.

RSXMB_CANCEL_NOT_REST_MESSAGES: cancels XI messages with errors that cannot be restarted.

RSXMB_DELETE_ARCHIVED_MESSAGES: deletes archived XML messages.

RSXMB_DELETE_MESSAGES: deletes XML messages marked for deletion (DEL) from the persistence layer.

RSXMB_CANCEL_MESSAGES: helps in mass cancellation of XI error messages.

RSXMB_SHOW_REORG_STATUS and RSXMB_SHOW_STATUS: provide an overview of all XML messages in the XI persistence layer.

RSXMB_MESSAGE_STATISTICS2: provides processing statistics based on the history table.

RXWF_XI_UNSED_MSGS: converts message states other than 003 to the final state, i.e., 003.

SXMB_REFRESH_ADAPTER_STATUS: converts message states such as 001/008 into the final state.

RSXMB_DEL_TO_ARCHIVE: changes the flag values from DEL to ARCHIVE in the master table.


Design — PI application transport strategy

Purpose

This document provides the guidelines for handling PI transports. It also guides consultants to adhere to a specific naming convention when creating transport requests.

Scope

The scope of this document is to provide the strategy for PI transports.

Audience

This document is written for all personnel directly involved with the design, configuration, and ongoing activities of interfaces developed within the SAP PI environment.

Transport method used: CTS+

SAP NetWeaver PI contains enhanced functionality that enables close coupling between the PI development tools (ESR and ID) and the Change and Transport System (CTS), as well as enhanced administration support during the configuration phase of TMS, supporting the maintenance of combined Java and ABAP systems.

The following graphic describes the important CTS+ components and their distribution to the PI systems for a typical three-system landscape:

Advantages:

1. Simplified transport management to avoid inconsistent system states.

2. Connects Java systems to the standard CTS.

3. Combined transport requests for mixed sets of objects (ABAP, Java, PI objects…, etc.).

4. Central administration of all transports (ABAP, Java, PI objects, EP Objects…, etc.) in one user interface.

5. Logs clearly show the state of each request for developers and administrators.

6. Imports are shown in the CTS+ system with all other requests.

7. Imports can be triggered, scheduled, and monitored centrally.


Prerequisites:

CTS+ has been configured to route the transports to the required target system.

All the PI-related development for the interface is frozen in DEV.

Unused ESR and ID components of the interface are deleted.

The interface has passed Functional Unit Testing (FUT).

All the required UNIX OS-level directories, user IDs, and passwords are created in the target system.

Transport methodology

System Landscape Directory (SLD):

Software catalog:

– Using the ‘Export’ option of SLD, create the transport request and assign the required custom products and software component versions to the request.

– The transport requests can be released from the Transport Organizer of the ABAP stack or from the Transport Organizer Web User Interface (UI).

System catalog:

– Using the ‘Export’ option of SLD, create the transport request and assign the required technical and business systems to the request.

– The transport requests can be released from the Transport Organizer of the ABAP stack or from the Transport Organizer Web UI.

– In the target system SLD, create source and target business system groups and assign the corresponding business systems to each group.

– For each source business system, add the corresponding target business system.

Points to remember:

1. The ownership of the above tasks will be with the BASIS team.

2. In the target system, all the technical systems should exist prior to transporting business systems.

3. Similarly, all the product versions should exist in the target system prior to transporting software component versions.

4. First, export the software catalog components and then the system catalog components.

5. Make sure to add the different export components to the transport request in the correct sequence: first add the components that are a prerequisite for the import of the others.

6. In the target system, the import is triggered manually or automatically by the system administrator.

ESR and ID:

Every interface scenario will have only one request created with all the ESR components and the corresponding ID components.


Creating the transports:

The transport requests for the components can be created using the following steps:

1. Select the components that need to be transported and choose menu option, Tools -> Export Design/Configuration Objects.

2. Select ‘Transport Using CTS’ as the transport mode, enter a description, and choose ‘Continue.’

3. The system now displays a proposed transport request (either a newly created one or an existing standard request for your user).

4. If a different request needs to be used, click on the link to access the Transport Organizer Web UI to create a new transport. Refresh the transport selection in your PI transport wizard to show the new transport request.

5. Choose ‘Finish’ to create the request and assign the selected components.

Releasing the transports:

1. The transport requests can be released from the Transport Organizer of ABAP stack or from the Transport Organizer Web UI.

2. After releasing the transports, wait a short while, then refresh the status and switch to your released transport to check the export log.

Searching the transports created:

1. From ‘Tools’ menu, choose ‘Find Transports.’

2. Choose ‘Advanced’ in the search screen and select ‘CTS Transport’ as the transport type and search for transports.

3. Select one transport line to display its status information. To display details (Attributes tab page) and the contained objects (Transported Objects tab page), double-click an entry.

4. From the transport display, the Transport Organizer Web UI can be opened to check the transport status.

Checking the transports in target system:

After the CTS import of the transport to the target system has been executed either automatically or manually by the transport administrator, check the result of the import in the target system.

1. From ‘Tools’ menu, choose ‘Find Transports.’

2. Choose ‘Advanced’ in the search screen and select ‘CTS Transport’ as the transport type and search for transports.

3. Select one transport line to display its status information. To display details (Attributes tab page) and the contained objects (Transported Objects tab page), double-click an entry.


Points to remember:

1. The ownership of the above tasks will be with the PI development team.

2. All design and configuration objects in the respective change lists required for a particular interface scenario should be activated.

3. The unused objects in the scenario should be deleted.

4. In ESR, while creating transport request for a scenario, make sure to include the required ‘Imported Objects’ if any used in the scenario.

5. The different export components should be added to the transport request in the correct sequence: first add the components that are a prerequisite for the import of the others.

Transport naming convention

Please provide the transport request description as follows:

For SLD components:

Description of components: SLD

Example:

DEV Business Systems: SLD

For ESR and ID Components:

FRICEW Object ID: <CR# XX>: <Description of the object/CR>: <ESR&ID/ESR/ID>

Example 1:

Object ID: Prepaid Freight: ESR&ID

Example 2:

Object ID: CR# XX: Sales Report Extract–Mail Subject Change: ESR

Design — Middleware/system sizing process

Sizing can be done using the Quick Sizer tool. Quick Sizer is a Web-based tool designed to make the sizing of the SAP Business Suite easier and faster. It was developed by SAP in close cooperation with all platform partners and is free of charge.

Quick Sizer calculates Central Processing Unit (CPU), disk, memory, and Input/Output (I/O) resource categories based on throughput numbers and the number of users working with the different SAP solutions in a hardware- and database-independent format.

The Quick Sizer tool can be accessed at:

http://service.sap.com/quicksizing

Development — Development standards and naming convention

Purpose

This document describes the naming standards for the SAP PI implementation. It is categorized into three parts: SLD, ESR, and ID.


SLD provides the overview and naming standards of its components like Software Catalog, Technical Systems, and Business Systems.

ESR provides the overview and naming standards of its components like Namespaces, Business Scenarios, Interface Objects, Mapping Objects…, etc.

ID provides the overview and naming standards of its components like Logical Routing, Collaboration Profiles, Collaboration Agreements…, etc.

The collection of naming standards presented here is a combination of best practices derived from other PI implementations and SAP naming standards recommendations.

Scope

The scope of this document is to provide naming standards for the PI objects.

Audience

This document is written for all personnel directly involved with the design, implementation, and ongoing operation of interfaces developed within the SAP PI environment.

Guiding principles and best practices

All objects will have functional/technical specifications. No object will be worked on without a specification.

Please use the naming standards in this document for all development requirements.

All objects that do not conform to the naming standards in this document will not be transported into production.

Support for shorter redeployment cycles and reconfiguration via loose coupling of interface components.

Minimize design and development time, and maximize reuse.

Must have centralized control and monitoring of end-to-end transaction progress.

Wherever possible, utilize common services during the design and development of the interface components.

SLD

The computing environment consists of a number of hardware and software components that depend on each other with regard to installation, software updates, and demands on interfaces. The SAP SLD simplifies the administration of the system landscape.

The SLD contains component information, a landscape description, and a name reservation, which are based on the standard Common Information Model (CIM). The CIM standard is a general schema for describing the elements in a system landscape. This standard is independent of any implementation.

The component information provides information about all available SAP software modules. This includes version numbers, current patch level, and dependencies between landscape components. It is also possible to add instances for third-party components to the component description.

The system landscape description represents the exact model of an actual system landscape. Together with the current component information, the system description provides information for various processes (the system administration and implementation, for example).


The major components of SLD are:

Software catalog

Component information regarding SAP solutions and their possible combinations and dependencies is already delivered by SAP. The software catalog can be enhanced with non-SAP products by the customer. Information on installed landscape elements, including software packages installed on the various application systems in the landscape, is contained within the SLD. This information is utilized during the design and configuration of integration scenarios with PI.

Naming standards

Product

Custom products to be defined for PI development.

Vendor: While creating the product, the vendor (owner) of the product should be specified. Usually, it will be the Uniform Resource Locator (URL) of the company.

E.g., abc.com.

Product name: Name of the product to be created. Usually, the product will be defined corresponding to the functional area.

E.g., ORDER_MANAGEMENT, FINANCE…, etc.

For example, all finance-related interface scenarios should be placed under the product name FINANCE.

Version: Version of the product.

E.g., 1.0, 1.1..., etc.

Software component

Custom software components to be defined for PI development. Each software component will be associated with a product.

Vendor: While creating the software component, the vendor (owner) of the product should be specified. Usually, it will be the URL of the company.

E.g., abc.com.

Name: Name of the software component to be created. Usually, the software component will be defined corresponding to the subprocess of the functional area.

E.g., ACCOUNTS_RECEIVABLE, GENERAL_LEDGER…, etc.

For example, the FINANCE product contains the software components ACCOUNTS_RECEIVABLE and GENERAL_LEDGER.

Version: Version of the software component.

E.g., 1.0, 1.1..., etc.

Technical systems

If the automatic registration of SAP systems is set up (data supplier bridge within SLD and data supplier within connected SAP systems), only technical systems of third-party products need to be maintained manually by the SLD organizers; otherwise, all technical system information needs to be maintained manually. Our recommendation is to always set up automatic registration of SAP systems, since it minimizes errors and work effort.

Naming standards

For SAP instances:

<SAP SID>

For technical systems defined for an SAP instance, use the SAP SID as the naming standard.

E.g., DE1 and DX1

For non-SAP instances:

<System Name>[_Description]

<System Name>:

Application name of the system (e.g., ORACLE).

It is recommended not to use the hostname.

[_Description] (Optional):

An additional free-form description can be used if the system name does not guarantee uniqueness for the definition of technical system.

E.g., ORACLE_National.

Business systems

Business systems are logical systems used in scenario configurations in which they act as senders or receivers of messages. Every business system is associated with a technical system.

For example, every client of an SAP system will be represented as a business system. In the case of non-SAP systems, there is no difference between technical and business systems.

Naming standards

For SAP instances:

<Technical System Name>CLNT<Client>

<Technical System Name>:

Name of the technical system.

<Client>:

Client number.

E.g., DE1CLNT100.

For non-SAP instances:

<Technical System Name>[_Description].

<Technical System Name>:

Name of the technical system (E.g., ORACLE).


[_Description] (Optional):

An additional free-form description can be used if the system name does not guarantee uniqueness for the definition of technical system.

E.g., ORACLE_National.
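Both business-system naming rules above can be expressed as a small helper. This is a sketch only; the class and method names are my own, not part of any SAP API:

```java
// Sketch of the business-system naming rules described above:
// SAP systems:     <Technical System Name>CLNT<Client>, e.g., DE1CLNT100
// non-SAP systems: <Technical System Name>[_Description], e.g., ORACLE_National
public class BusinessSystemName {

    static String forSap(String technicalSystem, String client) {
        return technicalSystem + "CLNT" + client;
    }

    static String forNonSap(String technicalSystem, String description) {
        // The description is optional; it is appended only when the
        // system name alone does not guarantee uniqueness.
        return (description == null || description.isEmpty())
                ? technicalSystem
                : technicalSystem + "_" + description;
    }

    public static void main(String[] args) {
        System.out.println(forSap("DE1", "100"));             // DE1CLNT100
        System.out.println(forNonSap("ORACLE", "National"));  // ORACLE_National
    }
}
```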

ESR

In the ESR, we define namespaces, business scenarios and processes, messages, and interfaces (interface objects) and mapping programs (mapping objects).

These objects are created using the editors provided by PI and these objects internally generate Web standard languages and protocols, such as BPEL, WSDL, and XML Schema Definition Language (XSD).

The different components of ESR are:

Namespaces of software component versions

A namespace uniquely identifies a set of names so that there is no ambiguity when objects have different origins, but the same names are mixed together. Within the ESR, we can group PI development objects from one software component version into logical units using namespaces. A software component version can have multiple namespaces. These namespaces provide additional grouping and organization logic for the interface components.

Naming standards:

http://<company>/<functional area>/<scenario description>

Note: The elements in the namespace should be in lowercase, except for <scenario description>.

<company>: URL of company.

E.g., abc.com.

<functional area>: Functional area of the interface.

E.g., purchasing.

<scenario description>: Description of interface.

E.g., PurchaseOrderCreate.

Namespace example: http://abc.com/purchasing/PurchaseOrderCreate.
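The namespace convention can be sketched as a helper that lowercases everything except the scenario description. The class is illustrative only:

```java
import java.util.Locale;

// Sketch of the namespace convention described above:
//   http://<company>/<functional area>/<scenario description>
// where all elements except the scenario description are lowercase.
public class PiNamespace {

    static String build(String company, String functionalArea, String scenario) {
        return "http://" + company.toLowerCase(Locale.ROOT)
                + "/" + functionalArea.toLowerCase(Locale.ROOT)
                + "/" + scenario; // scenario description keeps its casing
    }

    public static void main(String[] args) {
        System.out.println(build("abc.com", "Purchasing", "PurchaseOrderCreate"));
        // -> http://abc.com/purchasing/PurchaseOrderCreate
    }
}
```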

Business scenarios and business processes

The use of business scenarios enables us to define the message exchange and process flow (actions) for collaborative processes.

A business process is an executable, cross-system process that is implemented by BPM.

Naming standards:

Business scenarios (PI scenarios):

PIScen_<AaaaBbbbCccc>


<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., PIScen_ProcureToPay.

Actions

AC_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., AC_CreatePurchaseOrder.

Business process (integration process)

IP_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., IP_CollectPOIDOC.

Interface objects

Interface objects combine the following:

Service interface (message interface) is used to describe a platform-independent or programming language-independent interface, which is used to exchange messages between application components within PI.

Service operations are entities that perform specific tasks on a business object, e.g., creating, updating, or deleting a business object. The operation is related to an action applied to an object (or a method), or an event triggered or received.

A message type comprises a data type that describes the structure of a message.

Data type is a basic unit for defining the structure of the data for a message type. A data type can be created in a nested way by referencing data types from a complex data type. A data type enhancement can be defined on the basis of the corresponding standard SAP data type.

External definitions are not a specific type of an interface object. The structure of these objects is not created manually, but is loaded into the integration builder by uploading a file.

Context objects are pointers to a specific element (field) within the message, for future reference. They help encapsulate access to data that is contained in the payload or in the header of a message.

Naming standards:

Service interface

SI_<AaaaBbbbCccc>_<interface type>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.


<interface type>: Should be one of the following:

IB for inbound.

OB for outbound.

AB for abstract.

E.g., SI_PurchaseOrder_IB.
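The service-interface convention lends itself to a simple regular-expression check. The validator below is a sketch; the class name and pattern are my own:

```java
import java.util.regex.Pattern;

// Sketch validator for the service-interface convention above:
// SI_<AaaaBbbbCccc>_<interface type>, where the type is IB (inbound),
// OB (outbound), or AB (abstract), and the middle part is CamelCase.
public class ServiceInterfaceName {

    private static final Pattern NAME =
            Pattern.compile("SI_([A-Z][a-z0-9]+)+_(IB|OB|AB)");

    static boolean isValid(String name) {
        return NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("SI_PurchaseOrder_IB")); // true: matches the convention
        System.out.println(isValid("SI_purchaseorder_XX")); // false: wrong case and type
    }
}
```

Checks like this could run as part of a transport pre-check, rejecting objects that would otherwise be blocked from production under the naming-standard rule above.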

Service operation

<AaaaBbbbCccc>_<comm. type>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

<comm. type>: Should be one of the following:

Async for asynchronous.

Sync for synchronous.

E.g., SalesOrderCreate_Async.

Note: For stateless XI 3.0 compatible interface pattern, the name of the service interface and operation must be identical.

Message type

MT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., MT_PurchaseOrder.

Data type

DT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., DT_PurchaseOrderData.

Data type enhancement

DTE_<Original Data Type Name>_<Enhancement Description>

<Original Data Type Name>: Name of the data type being enhanced.

<Enhancement Description>: Short description of the enhancement.

E.g., DTE_PurchaseOrderData_Plant1000Extn.

External definitions

ED_<AaaaBbbbCccc>


<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., ED_SalesOrderXSD.

Context objects

CO_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., CO_PONumber.

Mapping objects

Mapping objects are used in the creation of data translation maps. This includes the various mapping technologies that can potentially be utilized within the PI landscape. They contain the following:

Message mappings are used to define mapping rules between source and target message types. These can be graphical, Java, or eXtensible Stylesheet Language (XSLT) mappings. Mapping templates can be created and reused (for creating mappings) across interface scenarios with similar message type structures.

Operation mappings (interface mappings) are used to assign mapping program(s) between source and target message types. Different mapping programs can be combined in sequence. For synchronous interfaces, a request and a response mapping can be provided.

Imported archives are used for importing externally defined mapping programs into the ESR. These can be of type XSLT or Java. All files to be imported must be in Java Archive (JAR) format.

Naming standards:

Operation mappings

OM_<Source Msg Int. Name>_to_<Target Msg Int. Name>

<Source Msg Int. Name>: Message interface defined for source system.

<Target Msg Int. Name>: Message interface defined for target system.

E.g., OM_ORDERS_ORDERS05_to_SI_PurchaseOrder_IB_Async.

Message mappings

MM_<source message type>_to_<target message type>

<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

E.g., MM_ORDERS_ORDERS05_to_MT_PurchaseOrder.
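The operation-mapping and message-mapping naming rules above can be combined into one small helper. This is a sketch only, with hypothetical method names:

```java
// Sketch of the mapping-object naming rules described above:
//   OM_<Source Msg Int. Name>_to_<Target Msg Int. Name>
//   MM_<source message type>_to_<target message type>
public class MappingName {

    static String operationMapping(String sourceInterface, String targetInterface) {
        return "OM_" + sourceInterface + "_to_" + targetInterface;
    }

    static String messageMapping(String sourceType, String targetType) {
        return "MM_" + sourceType + "_to_" + targetType;
    }

    public static void main(String[] args) {
        // Matches the examples in the text:
        System.out.println(messageMapping("ORDERS_ORDERS05", "MT_PurchaseOrder"));
        // -> MM_ORDERS_ORDERS05_to_MT_PurchaseOrder
        System.out.println(operationMapping("ORDERS_ORDERS05", "SI_PurchaseOrder_IB"));
        // -> OM_ORDERS_ORDERS05_to_SI_PurchaseOrder_IB
    }
}
```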

Mapping template

MTMP_<source message type>_to_<target message type>


<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

E.g., MTMP_ORDERS_ORDERS05_to_MT_PurchaseOrder.

Imported archives

IA_<source msg type>_to_<target msg type>_<Map Type>

<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

<Map Type>: Should be one of the following:

XSLT

Java

E.g., IA_ORDERS_ORDERS05_to_MT_PurchaseOrder_XSLT.

Miscellaneous

Function library

A function library groups a set of user-defined functions that can be reused across the mappings of the same software component version.

Naming standard:

FL_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., FL_Orders.

Parameters

Parameters are placeholders used to provide values dynamically.

Naming standard:

P_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., P_CommChannelName.

Graphical variables

Graphical variables store intermediate mapping results for reuse in multiple target field mappings.

Naming standard:

GV_<AaaaBbbbCccc>


<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., GV_PONumber.

Business objects

To define the business objects used in the interface scenarios.

Naming standard:

BO_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with separate individual words by switching between uppercase and lowercase characters.

E.g., BO_SalesOrder.

Alert category

To define the categories for the alerts used in the interface scenarios.

Naming standard:

ACAT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., ACAT_CommunicationAlerts.

ID

The goal of the ID is to configure the sender-receiver relationships, transport mechanisms, and security requirements that will be used at runtime.

The prerequisites for configuring an interface scenario in the ID are that the internal system landscape is implemented in the SLD and the relevant design objects are defined in the ESR.

In simple terms, interface scenarios are developed in the ESR, configured in the ID, and executed at runtime by the IS.

The different components of ID are:

Logical routing

Logical routing relates the message interfaces defined in the repository to potential senders/receivers. The primary building blocks are interface determination and receiver determination.

Naming standard:

No naming standards are required for logical routing objects since these are automatically generated by the PI server.

Collaboration agreements

Collaboration agreements define the technical details for message processing. The primary building blocks are sender agreements and receiver agreements.

Naming standard:

No naming standards are required for collaboration agreement objects since these are automatically generated by the PI server.

Collaboration profiles (parties and communication components)

In the collaboration profile, the technical options available to the communication parties for exchanging messages are documented. Here, the potential senders/receivers of messages and technical communication paths are specified. The primary building blocks are parties, services, and communication channels.

Party

This will be used if a larger unit (e.g., Company) is involved in a cross-system process.

Naming standard:

Party_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., Party_ABC.

Business component (business service)

If the parties involved have published only their interfaces and not their system landscape, using a business component, we can define the technical or business subunits of the companies involved and then assign them to relevant interfaces.

Naming standard:

BusComp_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., BusComp_ManuFileServer.

Business process (integration process)

A business process defined in the ESR can receive messages (to trigger process steps) or send messages. It is treated as a service and modeled here.

Naming standard:

BPM_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., BPM_CollectPOIDOC.

Business systems

Business systems defined in the SLD are imported to the directory and used as sender/receiver systems.

Naming standard:

Since these are defined in SLD and imported in ID, no naming standards are required.

Miscellaneous

Configuration scenario

All configuration objects related to an interface scenario are grouped together by configuration scenario. While creating the configuration objects, each of them should be assigned to a configuration scenario.

Naming standard:

CS_<Interface Name>

<Interface Name>: Interface name taken from FRICEW specification. Recommended to use FRICEW Interface Specification File Name.

E.g., CS_SalesOrderCreate.

Communication channel

A communication channel specifies the transport mechanism for a message — the adapter type and adapter configuration parameters. A communication channel should be assigned to a business system or business component. Depending on whether the business system or business component is addressed as a sender or receiver of messages, the assigned communication channel has the role of either a sender or a receiver channel, and must be configured accordingly.

Naming standard:

CC_<direction>_<adapter type>_<AaaaBbbbCccc>

<direction>: Should be one of the following:

In — for receiver.

Out — for sender.

<adapter type>: Type of the adapter used.

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., CC_In_File_PurchOrd.
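To make the convention concrete, a channel name can be decomposed mechanically. The helper below is an illustrative sketch of our own, not a playbook or SAP tool:

```python
import re

# Illustrative helper for the CC_<direction>_<adapter type>_<AaaaBbbbCccc>
# convention; the function name and regex are our own, not part of the playbook.
CC_PATTERN = re.compile(r"CC_(In|Out)_([A-Za-z0-9]+)_((?:[A-Z][a-z0-9]*)+)")

def parse_channel_name(name: str) -> dict:
    """Split a communication channel name into direction, adapter type,
    and description; raise ValueError if it violates the convention."""
    m = CC_PATTERN.fullmatch(name)
    if not m:
        raise ValueError(f"not a valid channel name: {name}")
    direction, adapter, desc = m.groups()
    # Per the convention above, In = receiver channel and Out = sender channel.
    return {"direction": direction, "adapter": adapter, "description": desc}

print(parse_channel_name("CC_In_File_PurchOrd"))
# {'direction': 'In', 'adapter': 'File', 'description': 'PurchOrd'}
```

A check like this can run during code review or as a transport pre-check to catch nonconforming channel names early.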

Folders

Folders provide an additional option for the logical grouping of objects.

Naming standard:

<Interface Name>

<Interface Name>: Interface name taken from FRICEW specification. Recommended to use FRICEW Interface Specification File Name.

E.g., SalesOrderCreate.

Value mapping group

This defines the groups used in value mapping for interface scenarios.

Naming standard:

VMG_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

E.g., VMG_ReceiverList.

Summary

SLD

Vendor E.g., abc.com

Product name Ex: ORDER_MANAGEMENT, FINANCE…, etc.

Software component Ex: ACCOUNTS_RECEIVABLE, GENERAL_LEDGER…, etc.

Technical systems For SAP instances:

<SAP SID>

For technical systems defined for a SAP instance, utilize the SAP SID as the naming standard.

Ex: DE1, DX1…, etc.

For non-SAP instances:

<System Name>[_<Description>]

<System Name>:

Application name of the system (E.g., ORACLE).

Recommended not to use hostname.

[_Description] (Optional):

An additional free-form description can be used if the system name does not guarantee uniqueness for the definition of technical system.

Ex: ORACLE_National.

Business systems For SAP instances:

<Technical System Name>CLNT<Client>

<Technical System Name>:

Name of the technical system.

<Client>:

Client Number

Ex: DE1CLNT100.

For non-SAP instances:

<Technical System Name> [_Description]

<Technical System Name>:

Name of the technical system (E.g., ORACLE).

[_Description] (Optional):

An additional free-form description can be used if the system name does not guarantee uniqueness for the definition of technical system.

Ex: ORACLE_National.

ESR

Namespace of software component versions

http://<company>/<functional area>/<scenario description>

Note: The elements in the namespace should be in lowercase, except for <scenario description>.

<company>: URL of company.

E.g., abc.com.

<functional area>: Functional area of the interface.

Ex: Purchasing.

<scenario description>: Description of interface.

Ex: PurchaseOrderCreate.

Namespace:

Ex: http://abc.com/Purchasing/PurchaseOrderCreate.

Business scenarios (PI scenarios)

PIScen_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: PIScen_Procure_to_Pay.

Actions

AC_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: AC_CreatePurchaseOrder.

Business process (integration process)

IP_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: IP_CollectPOIDOC.

Service interface SI_<AaaaBbbbCccc>_<interface type>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

<interface type>: Should be one of the following:

IB for inbound.

OB for outbound.

AB for abstract.

Ex: SI_PurchaseOrder_IB.

Service operation <AaaaBbbbCccc>_<comm. type>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

<comm. type>: Should be one of the following:

Async for asynchronous.

Sync for synchronous.

Ex: SalesOrderCreate_Async.

Note: For the stateless XI 3.0-compatible interface pattern, the names of the service interface and the operation must be identical.

Message type MT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: MT_PurchaseOrder.

Data type DT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: DT_PurchaseOrderData.

Data type enhancement DTE_<Original Data Type Name>_<Enhancement Description>

<Original Data Type Name>: Name of the data type being enhanced. <Enhancement Description>: CamelCase description of the enhancement.

Ex: DTE_PurchaseOrderData_Plant1000Extn.

External definitions ED_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: ED_SalesOrderXSD.

Context objects CO_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: CO_PONumber.

Operation mappings OM_<Source Msg Int. Name>_to_<Target Msg Int. Name>

<Source Msg Int. Name>: Message interface defined for source system.

<Target Msg Int. Name>: Message interface defined for target system.

Ex: OM_ORDERS_ORDERS05_to_SI_PurchaseOrder_IB_Async.

Message mappings MM_<source message type>_to_<target message type>

<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

Ex: MM_ORDERS_ORDERS05_to_MT_PurchaseOrder.

Mapping template MTMP_<source message type>_to_<target message type>

<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

Ex: MTMP_ORDERS_ORDERS05_to_MT_PurchaseOrder.

Imported archives IA_<source msg type>_to_<target msg type>_<Map Type>

<source message type>: Message type defined for source system.

<target message type>: Message type defined for target system.

<Map Type>: Should be one of the following:

XSLT

Java

Ex: IA_ORDERS_ORDERS05_to_MT_PurchaseOrder_XSLT.

Function library FL_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: FL_Orders.

Parameters P_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: P_CommChannelName.

Graphical variables GV_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: GV_PONumber.

Business objects BO_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: BO_SalesOrder.

Alert category ACAT_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: ACAT_CommunicationAlerts.

ID

Party Party_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: Party_ABC.

Business component BusComp_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: BusComp_ManuFileServer.

Business process (integration process)

BPM_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: BPM_CollectPOIDOC.

Business system Since these are defined in SLD and imported in ID, no naming standards are required.

Configuration scenario CS_<Interface Name>

<Interface Name>: Interface name taken from FRICEW specification. Recommended to use FRICEW Interface Specification File Name.

Ex: CS_SalesOrderCreate.

Communication channel CC_<direction>_<adapter type>_<AaaaBbbbCccc>

<direction>: Should be one of the following:

In — for receiver.

Out — for sender.

<adapter type>: Type of the adapter used.

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: CC_In_File_PurchOrd.

Folders <Interface Name>

<Interface Name>: Interface name taken from FRICEW specification. Recommended to use FRICEW Interface Specification File Name.

Ex: SalesOrderCreate.

Value mapping group VMG_<AaaaBbbbCccc>

<AaaaBbbbCccc>: Self-explanatory name with individual words separated by capitalization (CamelCase).

Ex: VMG_ReceiverList.
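The conventions in the summary above lend themselves to automated checks. The sketch below encodes a few of them as regular expressions; the patterns are our own reading of the table, not SAP-delivered tooling:

```python
import re

# A review-time sanity check for a few of the conventions summarized above.
# CAMEL approximates the <AaaaBbbbCccc> CamelCase placeholder.
CAMEL = r"(?:[A-Z][a-z0-9]*)+"
PATTERNS = {
    "message type":      rf"MT_{CAMEL}",
    "data type":         rf"DT_{CAMEL}",
    "service interface": rf"SI_{CAMEL}_(?:IB|OB|AB)",
    # http://<company>/<functional area>/<scenario description>,
    # following the table's own example for the functional area.
    "namespace":         r"http://[a-z0-9.]+/\w+/" + CAMEL,
}

def follows_convention(object_type: str, name: str) -> bool:
    """Return True if the name matches the pattern for its object type."""
    return re.fullmatch(PATTERNS[object_type], name) is not None

print(follows_convention("message type", "MT_PurchaseOrder"))          # True
print(follows_convention("service interface", "SI_PurchaseOrder_IB"))  # True
print(follows_convention("namespace",
                         "http://abc.com/Purchasing/PurchaseOrderCreate"))  # True
```

Patterns for the remaining object types (OM_, MM_, VMG_, etc.) can be added to the same dictionary as the project standardizes them.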

Development — Templates

Development — Checklist

Code review checklist

Type of development (e.g., data conversion):

Object ID:

Code reviewer:

Date reviewed:

Date approved by development management/name of development manager:

All programs

Administrative data

Printout of program code attached.

Development class attached.

Authorization group assigned.

Version management.

Documentation

Header section of program contains CR number/development request number.

Header section of program contains name of the programmer.

Date on which program was created.

Name of the technical specification.

Author of the technical specification.

Program title specified.

Input file name specified, if used.

Name of the output file name, if being created.

A brief description of what the program is going to do.

Restrictions/assumptions, if any.

Data declarations adequately documented.

Variables and parameters adequately documented.

Function modules identified and described.

Called transactions/subroutines identified and described.

Use of ABAP template.

Changes history present.

Subroutine comment blocks present.

General points

Removal of ‘dead code’.

Use of text elements.

Authorization object implemented.

Messages successfully issued to SLG1.

ENQUEUE and DEQUEUE function modules used before making any changes to custom tables.

Locking when calling transactions handles multiple-user runs; i.e., interactive reports that call transactions at some point within the run should lock the data at the beginning of the run.

List any outstanding optimizations that could improve the performance of the program.

Data declaration

Data declarations follow standard naming convention.

Constant declared instead of hardcoding values.

Variables based on dictionary object.

No redundant data declarations.

All variables have read/write references (cross-reference).

Data validation

Data entered by the user must be validated and handled properly if invalid. Note that when select-options or parameters based on dictionary objects link to a check table, no extra validation needs to be done.

Database accesses

SELECT only the fields required.

Do not use SELECT/ENDSELECT.

Use of SELECT… INTO TABLE where appropriate.

WHERE clause used rather than CHECK.

SELECT… WHERE matching primary key or secondary index.

Use views or join selects instead of nested select statements.

SELECT SINGLE with fully qualified key.

SY-SUBRC check performed for all selects/reads.

Database updates

No direct database updates (unless custom-built tables).

Whenever possible, use array operations instead of single row — INSERT/UPDATE/DELETE… FROM TABLE…

CALL TRANSACTION return codes checked.

CALL TRANSACTION process mode and update mode as variables.

Batch Data Communication (BDC) session report (number of transactions, sessions, etc.).
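The array-operation rule above is stated for ABAP (INSERT/UPDATE/DELETE ... FROM TABLE), but the principle is language-neutral: issue one statement for a whole internal table rather than one statement per row. A hedged illustration using Python's sqlite3 (table and data invented for the demo):

```python
import sqlite3

# Demonstration of "array operations instead of single-row updates".
# The zorders table and its rows are invented example data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zorders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(1, 10.0), (2, 25.5), (3, 7.25)]

# Single-row style (avoid): one round trip per record.
# for r in rows:
#     conn.execute("INSERT INTO zorders VALUES (?, ?)", r)

# Array style (prefer): one call handles the whole set of rows.
conn.executemany("INSERT INTO zorders VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM zorders").fetchone()[0]
print(count)  # 3
```

The saving in ABAP is the same in kind: fewer database round trips per internal table processed.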

Modular programming

No unnecessary duplication of code.

Sensible use of subroutines (single-purpose FORMs).

FORM parameters assigned a data type where possible.

Reusable code placed in function modules and documented.

SY-SUBRC checks and error handling on function module exceptions.

Use of standard function modules for date manipulation, pop-ups, etc.

Processing data

ON CHANGE OF is not used for processing control breaks in loops on internal tables. ON CHANGE OF is unsuitable for recognizing control levels in loops of this type because it always creates a global auxiliary field that is used to check for changes. This global auxiliary field is changed only in the relevant ON CHANGE OF statement and is not reset when processing enters loops or subroutines, so unwanted effects can occur if the loop or subroutine is executed again. Also, since it is set to its initial value when created (like any other field), the ON CHANGE OF processing will be executed at the first test unless the contents of the field happen to be identical to the initial value.

Use CASE statements instead of IF/ELSEIF chains; make sure every CASE statement includes WHEN OTHERS.

Sensible use of the MOVE-CORRESPONDING statement.

Use LOOP AT ... WHERE or READ TABLE ... WITH KEY instead of unnecessary full loops.
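The ON CHANGE OF caveat above comes down to hidden global state. The same control-break logic is safe when the previous key is held in an explicit local variable, as this language-neutral sketch (Python, invented data) shows:

```python
# Control-break processing with an explicit previous-key variable,
# instead of ON CHANGE OF's hidden global auxiliary field.
# The rows are invented example data, assumed sorted by the key.
rows = [
    {"customer": "C1", "amount": 10},
    {"customer": "C1", "amount": 5},
    {"customer": "C2", "amount": 7},
]

totals = []
prev_key, subtotal = None, 0
for row in rows:
    if prev_key is not None and row["customer"] != prev_key:
        totals.append((prev_key, subtotal))  # key changed: emit subtotal
        subtotal = 0
    subtotal += row["amount"]
    prev_key = row["customer"]
if prev_key is not None:
    totals.append((prev_key, subtotal))      # flush the final group

print(totals)  # [('C1', 15), ('C2', 7)]
```

Because the comparison state is local to the loop, re-running the loop or calling it from a subroutine cannot leak stale state between executions, which is exactly the failure mode the checklist item warns about.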

BDC program

Input file validation done and error message displayed accordingly.

F4 help given in case user is not entering any file name.

Call transaction/BDC_SESSION method used as per the specification.

After a call transaction, are the error messages captured in an internal table?

Is SY-SUBRC EQ 0 checked after a call transaction?

Refresh/clear BDCDATA done after a call transaction.

If BDC_SESSION method is used and if unable to open session using BDC_OPEN, appropriate message is given.

If BDC_SESSION method is used and if unable to insert using BDC_INSERT, appropriate message is given.

If BDC_SESSION method is used and if unable to close session using BDC_CLOSE, appropriate message is given.

Audit report displays which records have been posted and which have not.

Dynpro program

Screen numbers created as per ABAP programming standards.

Validations on input screen elements done using chain/endchain.

PF status created as per the specification.

Help functionality exists for all screen fields.

SAP scripts

“NAST_PROTOCOL_UPDATE” function module used to display all messages.

All long texts are displayed on the SAP script using the Layout Include command.

PI

PI naming standards are followed in the object.

Objects should be in Active status; activating an object also checks it for errors. The object is active with no errors.

Namespace should be created for every object with correct naming standards.

IDocs and RFCs should be loaded into SAP Component SAP APPL.

Message interfaces, message types, data types, message mappings, and interface mappings should be created with the correct naming standards and under the appropriate Software Component Version (SWCV).

Message mappings should comply with rules given in the mapping sheet.

A test case should be maintained for the mapping program created.

For any User Defined Functions (UDFs) created, in-line documentation is maintained.

All the ID steps should be added to relevant configuration scenario.

Sender system, sender interface, and receivers are given as per the requirement for receiver determination.

In the interface determination, the receiver interface and the corresponding interface mapping are to be given.

All the required configuration parameters are to be maintained for the adapters created (both sender and receiver).

In case of IDocs, check if ports are maintained in IDX1 and the metadata is loaded into IDX2.

Error handling as per functional requirements are implemented.

Run procedure

General information

Program:

Transaction code:

Brief description of object (in R/3):

Selection criteria (in R/3):

On the screen, selection options appear for the following:

Execute.

System information.

R/3 system:

Client:

Outbound system.

Inbound system.

Selection screen.

Using the variant

Click the icon to use the variant.

The variant screen is displayed.

Select the variant.

The selection screen is as follows:

Execute.

The output is as follows:

Development checklist

Type of development (e.g., data conversion):

Object ID:

Author of specification:

Reviewer of specification:

Date reviewed:

Date approved by development management/name of development manager:

Task More information

All programs

Authorization object(s) are specified

Program technical details are entered in development summary

Revised development estimate.

Standard SAP program usage Standard SAP program will be used.

BAPI will be used.

ALE will be used.

SAP function(s) will be used.

Specify reason for custom program.

Is a copy of an existing program required for modification? (specify reasons)

Name

Program Zskeleton has been copied as the template for the program (including standard report header) (relevant to all, except module pools and SAP script)

No data is changed/deleted from SAP tables using custom code

Conversion

Legacy System Migration Workbench (LSMW) will be used

LSMW naming conventions are used.

Translation tables Custom table will be created in data dictionary. First two columns are: ‘Site’ and ‘Application’ to ensure global design.

Global function will be created/used for the translation.

Use standard program to upload delimited file into SAP custom table.

Create and change mode for uploaded data will be implemented

Program runs in background mode using files

Directory paths for conversions are used.

Error-handling tools designed System failure during upload.

It is possible to correct failed transactions in bulk by a user (i.e., BDC sessions).

Audit report defined Clear audit of all errors.

Clear audit of all successfully uploaded data.

Interfaces

Legacy programming Is legacy programming required?

Legacy file formats and program requirements have been specified.

Legacy specification section is complete.

Development summary has been updated with legacy program name and developer name.

File transfer method Legacy to SAP automatic data transfer is required. (If not, how will data be transferred to and from SAP?)

External application file will appear in appropriate system directory.

SAP output files are output on appropriate system directory.

What file transfer tool is used? (Network File System/FTP/Direct Pick by ABAP Prog., etc.)

Data frequency and volume information is detailed

Volumes have been confirmed and development summary has been updated.

Translation tables Custom table will be created in data dictionary.

Global function will be created/used for the translation.

Audit/error monitoring SLG1 is used for all major audit milestones and error messages. (If not, why?)

SLG1 objects have been designed according to the functional specification.

Enhancements

Audit and error messages SLG1 is used for all major audit milestones and error messages. (If not, why?)

SLG1 objects have been designed according to the functional specification.

PI

Naming standards Check if the naming standards are being followed for all the PI objects. Refer to PI Naming Standards Document.

ESR Interfaces should be created only when required. All the IDocs and RFCs should be under imported objects.

Technical Details section should list all the steps used for developing ESR objects.

Message interfaces should be specified along with the relevant message types (inbound and outbound). Message types should list the data types being used.

Interface mapping and message mapping should be checked with respect to interfaces being used and the mapping programs as well.

Mapping sheet with the rules mentioned in the spec is attached.

ID Technical Details section should list all the steps used for developing ID objects.

Check if the systems mentioned are maintained in SLD.

Key combinations and requirements should be maintained for receiver determination, interface determination, receiver agreement, and sender agreement.

All the configuration parameters are specified for the adapters.

Others Recovery strategy/restart strategies to be specified.

Error-handling and notification scenarios to be mentioned in the spec, along with test cases.

Workflow

Business flow diagram (flowchart) with all technical details

List of all tasks, including workflow template/standard task/custom task

List of business objects — Methods — Events

Role resolution for each dialog task

If function modules used for role resolution, then details of function modules

If configuration is required to do in SPRO transaction, then configuration details (class, release procedure, workflow triggering..., etc.)

Pseudo code giving complete details of relevant programs, transactions, event triggering, workflow handling, SAP document posting, and any integration from workflow (e.g., email to Outlook)

HR Organization object: If used, then details regarding org. objects (org. unit, position…, etc.) along with usage

Error-handling scenarios

Development readiness checks

1. The following document details the steps to be carried out to perform system development readiness and connectivity checks:

2. Roles, responsibilities (authorizations), and skill sets in a PI environment:

Development status and templates

Usually, object development status is tracked using the FRICEW tracker. If a project uses a proprietary tracking tool, the status can be downloaded from it directly. A sample FRICEW tracker and a monthly project report are attached. Some templates can also be accessed through Quest.

Production cutover

Plan

Strategy

Introduction

The final phase of any SAP implementation is the cutover phase. Any organization implementing SAP or SAP PI needs to plan, prepare, and execute the cutover, which is done by finalizing a cutover strategy and a cutover plan. The execution of the cutover phase should be based on that strategy and plan.

Scope

A typical SAP PI cutover is executed in coordination with various teams, typically the SAP security, SAP Basis, SAP development, and legacy system teams. The activities of all these teams need to happen in a defined and approved chronological sequence. This document details the roles and responsibilities of all such teams during the cutover phase.

Executive summary

The following is a bird’s-eye view of the responsibilities of the various teams:

SAP security — The SAP security team is responsible for setting up users on PI for the IT support and business support roles, along with SAP basis/infrastructure users. They are also responsible for setting up the service user accounts used in RFC destinations.

SAP basis/infrastructure — The SAP basis/infrastructure team is responsible for installing the SAP PI software, creating RFC destinations, configuring the integration engines and the IS, transport management, CCMS setup and integration with central CCMS, setting up housekeeping jobs, and configuring high availability/disaster recovery. They are also responsible for setting tuning parameters and loading metadata.

SAP development — The role of SAP development during a cutover is to import and activate transports, validate the transported objects, perform smoke testing, and configure communication parameters for the integrating applications, along with activating communication channels as outlined in the cutover plan. The tasks for each team are detailed in the sections below.

Legacy teams — The following are a few examples of legacy-related tasks:

a. Promote interface-specific changes to the legacy environment, if necessary.

b. Perform data conversions, if any.

c. Perform business activities like repointing file destinations, renaming folders, etc.

SAP security tasks

The following are the core tasks performed by SAP security teams during a cutover or system build:

Create and transport support roles

The following are the types of support roles that security would need to create to support the PI cutover:

Role type | Teams using the role | Description | ABAP/Java key transactions

Infrastructure support role | Basis/infrastructure | This is a super admin role; it enables users to install PI components, monitor PI components, and configure the IS. | SXMB_ADM (ABAP), NWA (NetWeaver Administrator)

IT support role | IT support/development | This role enables the IT support/development team to edit communication channels. This is required to set up connectivity details like the FTP server address or Web service destination. | SXMB_MONI, ID, RWB

Business support | Business users [should this be PI support team] | This role enables business users to monitor PI messages. | RWB

Service/system user role | RFC users | This is to set up RFC destinations. This role should enable the system user to process messages through the PI pipeline.

These roles need to be transported to the production environment.

Assign users to roles

Once the role is transported into the production box, the respective users need to be assigned to the respective roles. These users need to be identified before the cutover process is kicked off.

SAP basis tasks

The following are the core tasks that need to be performed by SAP basis during the PI cutover phase:

Validate SLD content

The following validations need to be done in PI SLD by SAP basis:

Validate technical system

Technical systems are application systems that are installed in your system landscape. An example of an application system is a CRM server. In the SLD, there are four types of technical systems: Web AS ABAP, Web AS Java, stand-alone Java, and third party.

All SAP systems that form the part of the PI integration landscape get automatically registered to the PI SLD. Examples of these technical systems are SAP ECC, SAP CRM, SAP SRM, etc. Basis needs to validate these technical systems.

Business systems

Business systems are logical systems that function as senders or receivers within PI. Business systems can be SAP systems or third-party systems.

Depending on the associated technical system, the business systems defined in the SLD are of the following types: Web AS ABAP, Web AS Java, stand-alone Java, and third party.

These should be created based on the production clients in the SAP PI SLD. Basis needs to create and validate the business systems.

Validate transport groups and business system mapping.

Since the business system names in DEV, QA, and PRD are different across the landscape, the ID configuration defined in DEV would be invalid for QA or PRD. To avoid this situation, transport targets are used. By defining transport targets, the referred business systems are changed automatically and the configuration remains valid.

To achieve this, mappings and transport groups need to be defined in the SAP PI SLD. These mappings map the DEV and QA business systems to the PRD business systems.
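Conceptually, a transport target is just a lookup from source-landscape business system names to their production counterparts. The sketch below (system names invented) illustrates the remapping the SLD applies automatically during transport:

```python
# Illustrative only: how transport targets remap business system names
# between landscapes. The system names here are invented examples.
TRANSPORT_TARGETS = {
    "DE1CLNT100": "PE1CLNT100",           # DEV ECC client -> PRD ECC client
    "ORACLE_National_DEV": "ORACLE_National",
}

def remap_business_system(source_name: str) -> str:
    """Return the production business system a DEV/QA name maps to;
    names without a mapping pass through unchanged."""
    return TRANSPORT_TARGETS.get(source_name, source_name)

print(remap_business_system("DE1CLNT100"))  # PE1CLNT100
```

Keeping this mapping complete in the SLD before cutover is what lets the same ID configuration stay valid across DEV, QA, and PRD.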

Basis/Infrastructure validations

The following set of infrastructure validations needs to be performed by SAP basis:

Validate RZ10 Profile parameters of connecting SAP business systems

Validate Java VM parameters in Config tool

Validate exchange Profile parameters

Execute SLDCHECK in all Integration engines of connecting SAP business systems

Validate all JCO providers in PI System

Validate all RFC connections in SM59

Verify if all the required SSL certificates are installed and are valid through STRUST/NWA

Validate that CCMS and Process Monitoring Infrastructure (PMI) monitoring are enabled

Validate the alert configuration settings, and verify that CCMS and RWB alert rules are properly configured

Validate the connections to the TREX system, if used, and to any scheduling software such as Redwood

Validate that the Web dispatcher and all dialog instances are up and running

Verify that the trace levels are set to the minimum/required level on the ABAP and Java stacks

Mount shared file directories

Install drivers for the JDBC/JMS or SOAP adapters

Basis/Infrastructure background jobs

The following set of background jobs need to be activated or scheduled:

SAP_BC_XMB_DELETE_<client> (deletion of XML messages, if not archived)

SAP_BC_XMB_HIST_DELETE_<client> (deletion of history entries)

ARV_BC_XMB_WRP<date> (archiving of XML messages, if not deleted)

ARV_BC_XMB_DEL<date> (deletion of archived XML messages, if not deleted)

SXMS_DELAYED_MSG_PROC<client> (job for delayed message processing, only if delayed processing is used)

ARV_WORKITEM_WRP<date> (archiving of work items, only if ccBPM is used)

ARV_WORKITEM_DEL<date> (deletion of archived work items, only if ccBPM is used)

SWWERRE (restart of erroneous ccBPM process, only if ccBPM is used)

SWWDHEX (monitors deadlines of ccBPM processes, only if ccBPM is used)

SWWCLEAR (deletion of job logs, only if ccBPM is used)

RSXMB_RESTART_MESSAGES (automatic restart of erroneous, asynchronous messages)

RSWWWIDE (deletion of work items, if not archived, only for ccBPM)

RSWF_XI_INSTANCES_DELETE (deletion of archived work items, only if ccBPM is used)

SXMS_PF_REORG (reorganizes performance data)


SXMS_PF_AGGREGATE (aggregates performance data)

SXMS_REFRESH_ADAPTER_STATUS (refresh of outbound adapter status, only if the IDoc adapter and BPE are used)

RSALERTPROC (deletion of alert and alert logs from database)
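As a sketch of how these conditions combine, the scheduling checklist might be expressed as follows (the feature flags are our own illustration; the job names are taken from the list above):

```python
# Sketch of a checklist helper: given which optional PI features are in use,
# derive the background jobs that must be activated or scheduled.
# The feature flags are an assumption for illustration purposes.

ALWAYS = [
    "SAP_BC_XMB_DELETE_<client>",
    "SAP_BC_XMB_HIST_DELETE_<client>",
    "RSXMB_RESTART_MESSAGES",
    "SXMS_PF_REORG",
    "SXMS_PF_AGGREGATE",
    "RSALERTPROC",
]

CCBPM_JOBS = [
    "ARV_WORKITEM_WRP<date>",
    "ARV_WORKITEM_DEL<date>",
    "SWWERRE",
    "SWWDHEX",
    "SWWCLEAR",
    "RSWWWIDE",
    "RSWF_XI_INSTANCES_DELETE",
]

def required_jobs(ccbpm=False, delayed_processing=False, idoc_with_bpe=False):
    """Return the list of background jobs to schedule for this landscape."""
    jobs = list(ALWAYS)
    if delayed_processing:
        jobs.append("SXMS_DELAYED_MSG_PROC<client>")
    if idoc_with_bpe:
        jobs.append("SXMS_REFRESH_ADAPTER_STATUS")
    if ccbpm:
        jobs.extend(CCBPM_JOBS)
    return jobs
```

Such a checklist makes it easy to spot a missing ccBPM housekeeping job before go-live.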


SAP PI instance/profile parameters

The following instance parameters can be set by the Basis/Infrastructure team:

Parameter | Value | Description

abap/arfcrstate_col_delete | X | Activates deletion of ARFCRSTATE records in the background; forces report RSTRFCEU to run periodically in batch every five minutes (see SAP Note 539917)

gw/max_conn | 2000 | Sets the maximum number of active connections (gateway)

gw/max_overflow_size | 10000000 | Sets the size of the local memory area for the gateway

icm/HTTP/max_request_size_KB | 2097152 | Maximum size of an HTTP request accepted by the ICM

rdisp/appc_ca_blk_no | 2000 | Sets the TCP/IP communication buffer size

rdisp/force_sched_after_commit | no | Disables automatic rollout of the context after COMMIT WORK

rdisp/max_arq | 2000 | Maximum number of internal asynchronous messages

rdisp/max_comm_entries | 2000 | Sets the maximum number of communication entries

rdisp/rfc_max_own_login | 90 | Sets the RFC quota for own logins

rdisp/rfc_max_own_used_wp | 90 | Sets the RFC quota for own-used work processes
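A pre-cutover check could compare the actual profile values against this table. The sketch below assumes the actual values have already been exported into a plain dictionary (for example, read off from RZ11); it is an illustration, not an SAP tool:

```python
# Sketch: compare actual instance-profile values against the recommended
# values from the table above. Missing parameters are reported as None.

RECOMMENDED = {
    "gw/max_conn": "2000",
    "gw/max_overflow_size": "10000000",
    "icm/HTTP/max_request_size_KB": "2097152",
    "rdisp/max_comm_entries": "2000",
    "rdisp/rfc_max_own_login": "90",
}

def deviations(actual: dict) -> dict:
    """Return {parameter: (actual, recommended)} for every mismatch."""
    return {
        key: (actual.get(key), value)
        for key, value in RECOMMENDED.items()
        if actual.get(key) != value
    }
```

Any nonempty result would be raised with the Basis/Infrastructure team before the system is handed over.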

Simple Mail Transfer Protocol (SMTP) and SCOT configurations

SAP Office emails are configured through SMTP. The transactions used to configure and monitor this are SCOT and SOST. SAP basis needs to configure and validate these settings.

Purge messages in PI

Before a PI box is handed over by basis for data loads or interface execution, basis needs to validate which messages are present on the PI box.

If messages already exist in the PI database, the database needs to be purged and cleaned up.

Apply Archiving and Deletion settings

SAP basis needs to apply the message archiving and deletion settings on both the PI ABAP and Java stacks.

The archiving parameters (e.g., the duration to archive) are governed by business needs.


Configure Integration Engine/Server

The Integration Engine is a runtime environment of SAP NetWeaver PI. The integration processes involved can take place between heterogeneous system components within a company as well as between business partners outside company boundaries.

As a PI runtime component, the Integration Engine has the task of receiving, processing, and forwarding XML messages. During message processing, collaboration agreements are evaluated, the receivers are determined, and mapping activities are executed.

The individual processing steps are called pipeline services and are defined in pipelines. It is the task of the Integration Engine to process these pipelines correctly and consistently.

To guarantee this, the Integration engines on various SAP systems, like SAP ECC, SAP CRM, and SAP SRM, need to be configured accordingly.

This is done via the transaction SXMB_ADM. The core parameters that need to be configured are:

Maintain ‘Role of Business System’ as Integration Engine.

Maintain Integration Engine URL.

Check the specific configuration for the categories DELETION, MONITOR, PERF, RUNTIME, and TUNING (see the relevant SAP Note).

Verify that all custom software component versions are available in the SLD.

Configure SXMB_ADM for payload accessibility in SXMB_MONI.

Set up logging and trace levels.

Set up receiver/sender IDs via SXMSIF.

Set up parameterized IS URLs.

Maintain archiving and deletion Jobs.

Register queues in PI as well as satellite systems, like ECC, SRM, and CRM. Also, verify the queue priority settings, if used.


Import transports

The final step to be executed by basis is to import the transports into the Integration Builder or the ESR.

Various mechanisms used for the transports are CTS+, file-based transport, etc.

Development team tasks

Activate Transports

Unlike ABAP transports, which are automatically activated when they are imported into a target system, PI transports (specifically, Integration Directory transports) need to be activated manually, because communication details are not transported; they differ in each environment.

The ESR transports are similar to ABAP transports and do not need any manual activation.

Let us take an example where CTS+ is used to manage transports.

To activate an ID transport, go to the target system Integration Directory > Tools > Find Transports. All the transports imported into the system will be displayed on the right side.


All the transports imported into the system will be under the transport user NWDI_CTSADM. To activate the transports, the required worklist under user NWDI_CTSADM needs to be transferred to the user activating them.

Configure communication details

The details that need to be maintained in the communication channels need to be collected prior to the cutover. This task is part of the system build activity. The following examples show the type of information required to be determined for common integration patterns:


Examples:

Communication channel using a file adapter:

The details required would be the FTP folder/directory, the FTP server/port, the user name/password, system-dependent OS commands, etc.

Communication channel using a JDBC adapter:

The details required would be the connection parameters, the user name/password, the SQL statements, etc.
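The collected details can be sanity-checked before cutover. The sketch below encodes the required fields from the two examples above; the field names are our own shorthand, not PI's internal attribute names:

```python
# Sketch of a pre-cutover completeness check for collected communication
# channel details. Field names are illustrative shorthand only.

REQUIRED_FIELDS = {
    "FILE": {"directory", "ftp_server", "ftp_port", "user", "password"},
    "JDBC": {"connection_url", "driver", "user", "password", "sql_statement"},
}

def missing_fields(adapter_type: str, details: dict) -> set:
    """Return the set of required fields that are absent or empty."""
    required = REQUIRED_FIELDS.get(adapter_type.upper(), set())
    return {field for field in required if not details.get(field)}
```

Running such a check over the collected channel worksheet catches gaps before the activation window, when chasing missing passwords is most expensive.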

Deactivate and activate communication channels

To avoid any accidental transfer of data from/into SAP prior to go-live, all the channels are to be deactivated. This should be done as part of the "Configure communication details" task under Section 6.2.


The channels (interfaces) should remain inactive unless a go-ahead is given as per the deployment plan.

This can be done either at the Integration Directory level, as shown below, or in the RWB.

ID setting: Most of the Java-based adapters have a setting within the communication channel configuration for activation or deactivation. This setting should be marked as inactive after the initial system build.

The RWB components provide the following options for channel deactivation:

Deactivation via manual control

Deactivation via external control
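For external control, channels are typically driven through an HTTP servlet on the Adapter Engine. The sketch below only builds such a control URL; the ChannelAdminServlet path and parameter names are an assumption that should be verified against your PI release before use:

```python
# Sketch: building the HTTP request used for external channel control.
# The servlet path and parameters below are assumptions to verify against
# your PI release; no request is actually sent here.

from urllib.parse import urlencode

def channel_control_url(host: str, port: int, party: str, service: str,
                        channel: str, action: str) -> str:
    """Compose an external-control URL for starting/stopping a channel."""
    if action not in {"start", "stop", "status"}:
        raise ValueError("action must be start, stop, or status")
    query = urlencode({"party": party, "service": service,
                       "channel": channel, "action": action}, safe="*")
    return f"http://{host}:{port}/AdapterFramework/ChannelAdminServlet?{query}"
```

A wildcard "*" for party/service/channel would address several channels at once, which is how a deployment script could stop all interfaces in one call.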

Validate partner profiles

Validate all the partner profiles in the source and the target systems of SAP using WE20 transaction. The inbound/outbound parameters along with message controls should be validated.


Smoke/Connectivity testing

A smoke test is a basic connectivity test that can be conducted in a newly built PI environment. This test is done without posting any data from or into an SAP system and certifies the connectivity configurations in PI.

The following are a set of sample patterns that can be used for smoke testing:

Connectivity test patterns and testing process:

SAP to PI using proxy:
1) Validate the RFC connections.
2) Send a dummy message from ECC using transaction SPROXY and stop the queue on the target. The message needs to be archived/purged after the tests.

PI to SAP using proxy:
Validate the RFC connections.

SAP IDoc to PI:
Generate a dummy IDoc in ECC via transaction WE19. The message and IDoc need to be archived after the test is successful. The queues/communication channels on PI should remain inactive during the tests.

Third party to PI:
Java-based adapters such as SOAP, JDBC, JMS, and File can be tested via the communication channel status in the RWB.

These tests are typically conducted by the development teams. Note that smoke testing needs to be done in a very controlled environment, to avoid posting any data into an SAP or non-SAP system before the designated date.


Interface monitoring approach and strategy

The capabilities for central monitoring with SAP NetWeaver PI are offered through the dedicated PI work center in SAP Solution Manager. Going forward, SAP Solution Manager will be the recommended platform and tooling for central monitoring of complex, distributed, multidomain PI landscapes. The main central monitoring capabilities include the overview monitor, message monitor, component monitor, and channel monitor, all integrated with the SAP Solution Manager end-to-end (E2E) monitoring and alerting framework and with Information Technology Infrastructure Library (ITIL)-based processes for incident management and notification.

All of this, along with increased connectivity options, improved local monitoring, and fault tolerance, leads to a considerable reduction in the total cost of operations for SAP NetWeaver PI 7.3.

Introduction

SAP is continuously introducing innovations in each and every product. As we already know, from SAP PI 7.3 onwards we have the option to install a stand-alone Java-only component for PI. Recently, SAP released SAP PI 7.4, which is optimized for the HANA DB. SAP has already released Solution Manager 7.1 on HANA, which is similar to SM 7.1 SP08.

As a customer, we have been learning the new features added to PI 7.3 monitoring and also identifying how those new features are reflected in Solution Manager 7.1 SP7. Through this blog, I would like to share the possibilities available for central PI monitoring in Solution Manager.

Available from

Central PI monitoring (PIMON) is one of the new features available from SM 7.1 SP0 onwards (applicable to SAP PI 7.1 SP6). It is like other functionality in the Technical Monitoring work center, but from SAP PI 7.3 it has some further improvements. The enhancements to the PI monitoring part of Solution Manager go along with the PI NetWeaver version. With PI 7.3, the PI monitoring in Solution Manager is highly integrated with Integration Engine process monitoring, and you can directly view the PI monitoring dashboards available in the Technical Monitoring work center of Solution Manager 7.1. You can find the details in "What's New in SAP NetWeaver 7.3 (Release Notes)" in the SAP Library.

Time... is now!!

We are currently using PI monitoring from the Runtime Workbench (RWB) and very little interface monitoring with Solution Manager, but we felt it was the perfect time to adopt the central PI monitoring solutions available in SM 7.1 SP7.

RWB is one of the local monitors (another local monitor for PI is SXMB_MONI) that have been used for years; they only help to monitor an individual PI domain. Alternatively, we can go with the typical NWA as a CEN system based on Computing Center Management System (CCMS)/RFC-based monitors. These local monitors have also been enhanced from time to time with each release of PI. Some of these enhancements can be reviewed, with screenshots, in "SAP PI 7.3 New Local Monitoring Tools."

Though local monitors offer many enhanced features, RWB cannot monitor messages processed in the new AEX (the Java-only stand-alone installation option for SAP NetWeaver PI 7.3). In such systems, monitoring can be done via central PIMON in Solution Manager or the local NWA for PI. Java-only PI is a very light application and, moreover, a cost-saving approach compared with installing the entire NW PI, and it is compatible with Solution Manager PIMON in SM 7.1, since technical monitoring is purely based on Solution Manager Diagnostics (SMD) agents.

We do use BPMon interface monitoring for PI-related key figures, but this does not cover other features such as component and message monitoring.

Another reason we could go for central PIMON in Solution Manager, instead of the typical monitoring approach: everything is monitored centrally in one place, you can be directed to the root cause of a problem, and reporting is possible through integration with other Solution Manager capabilities like RCA and ITSM.

From a configuration perspective too, the PIMON setup can be done via self-guided wizards, which is a better approach than NWA or BPMon, where some steps must be performed manually.

And I can say that all the issues we faced with respect to PI monitoring have already been identified and corrected, as documented in the wiki (PiMon_Home, Technical Operations, SCN Wiki). A couple of weeks ago, we successfully completed our PIMON monitoring setup with minimal effort.

Customers’ view

After the implementation, our customers are also delighted; they commented that the two things they liked about PIMON in Solution Manager are the central dashboards and the easy navigation.

 

Central PI monitoring available in Solution Manager helps to monitor components, channels, and messages; in addition, message search scenarios are available (see "Using central user-defined message search," Technical Operations, SCN Wiki).


Pleasant Dashboard view of PI monitoring

Detailed navigation for each node


Channel monitoring navigation

Defect and change management

Change management is a vital process for controlling the development life cycle of software, from product design and development to production deployment.

During the product development phase, and even after production go-live, the product has to undergo changes, enhancements, and bug fixes. Tracking these changes is a crucial task for organizations. Only correct, authorized, and properly documented changes should be implemented.

SAP ChaRM (Change Request Management) provides change management capabilities and enforces a standardized approach to these activities. SAP ChaRM is an SAP Solution Manager-based tool that provides tracking and documentation of all CRs and transports for business solution development in an organization.

In this post, we will focus on different ChaRM approaches in SDLC.

A typical CR process is initiated by a business user via the service help desk. The help desk assigns the CR to the respective manager, who validates the request before taking further action.

The manager validates that the request is valid.

The manager ensures the CR is approved by the business, both logically and financially.

The manager assigns the CR to a developer and tracks all changes from the DEV to the PRD system.


Standard CR process for development/enhancements

New requirement/correction request by the business user

This requirement is validated by the change manager

– The change manager provides information on effort, duration, and cost for this CR

After approval of the CR, the CR is accepted by the SAP development team

– SAP Functional team provides the functional specifications and test cases

– SAP customizing is conducted (if needed)

– SAP ABAP development is conducted by the ABAP development team

– Unit testing is performed by the SAP team

The CR is created as an SAP transport and moved to the SAP quality system

– Testing is performed by the SAP testing team.

– If defects are found by the SAP testing team, the CR is moved back to the SAP development team.

Testing is performed by the business user

– If a new change or additional requirement is requested by the business user, the CR is moved to the SAP development team

After testing, the CR is approved by the change manager for production.

– The transport is moved to SAP Production system

The CR is closed.
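The process above can be summarized as a small state machine. The sketch below paraphrases the standard steps with our own state names; ChaRM manages the real statuses inside Solution Manager:

```python
# Sketch of the CR life cycle as a state machine. State names are our own
# paraphrase of the process steps above, not ChaRM's internal statuses.

TRANSITIONS = {
    "created": {"validated"},
    "validated": {"in_development"},
    "in_development": {"in_testing"},
    # Defects found in testing loop the CR back to development.
    "in_testing": {"in_development", "approved_for_production"},
    "approved_for_production": {"in_production"},
    "in_production": {"closed"},
}

def advance(state: str, new_state: str) -> str:
    """Move a CR to a new state, rejecting any transition not in the process."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Encoding the allowed transitions is what prevents, for example, a CR being closed without ever passing testing and approval.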


Handling bug fixing, issues, and user changes

This type of CR is handled in the same way as described in the section above.


Urgent changes

These types of changes are conducted directly in the SAP production system in cases such as the following:

Example

Screen text translation

SM59: System connectivity configurations

Functional configurations, such as number range configurations, etc.


Outage processes

Planned outage processes


Unplanned outage processes


Best practices to avoid unplanned downtime


Possibilities to secure a SPOF

Possible high-availability setups for SAP NetWeaver 7.0


General recommendation

Release management

Purpose

You can use release management to summarize functionally related change states for components of a release order product. You can use workflow management to assign the release order to several employees, so that these employees check the completeness and consistency of the change states, and then release these change states for further processing. The release can involve several stages.

SAP workflow management is used in release management.

Introductory notes

Implement release management if your products consist of many components, and you use the SAP change management to document changes.

In the SAP-standard system, you can use release management for change states of item variants and color variants in Integrated Product and Process Engineering (iPPE).

Integration

You use iPPE. Release management was realized for the release object type Change State of Item Variant or Color Variant in iPPE. However, you can also program your own release object types.

For more information, see the section “Notes for Implementing Your Own Release Object Types.”


Features

You can configure release management variably. Your release process can involve one or more levels. Every level can be assigned to one or several users that are responsible for the release.

The change states that the users are to check and release are collected for release orders.

In a release order, you create change states that are functionally related.

You can issue a release order as well as cancel it.

EAI SW upgrade/update processes

Decision factors

This unit lists important decision factors, including background descriptions and rating support. The three most important factors, the exclusive decision factors, are listed at the beginning. If your environment allows an upgrade in principle, the following subsections, listing IT/operations-related and interface/scenario-related decision factors, provide more insight into what needs to be considered.

Alongside these factors, your customer-specific environment and your project and IT setup might dictate other critical points that should also be listed and rated for your project decisions. The following diagrams show our ratings, in orange, for each decision factor in relation to the questions that have to be answered. This may, but need not, be correct for your project. We recommend that you discuss these factors during the initial project phase within your team of experienced XI/PI consultants.


Below is a checklist that you can use for reference once you have read the details of each factor. Note that most of these questions do not have a simple yes or no answer, and other factors (such as your current release version) might influence the conclusion drawn from the answers, so please read through the details before going through the checklist.

Exclusive decision factors

If one of the following three exclusive factors points your environment toward the new installation and phase-out option, you can stop here in the document and start preparing your project for the only available option: in your case, the installation of SAP NetWeaver PI 7.3.


Usage type

If you are running usage types like Business Intelligence (BI), Composite Environment (CE), Portal (EP), or Development Infrastructure (DI) in addition to PI in your productive system, then an upgrade can only be executed if the other usage types are upgraded at the same time. Note, however, that running multiple usage types with PI 7.3 is not recommended.

In this case, you must ensure that the other usage types are available, that they have an upgrade path to NetWeaver 7.3, and that your team is ready to upgrade them. If you are running BI or CE with custom development, this means ensuring that these applications have been vetted in the new version and are ready to be upgraded. As described above in Section 3.2, if you have a lot of custom development, you should isolate the migration of the development to the new version from the upgrade.

SAP does not currently support a staggered approach to upgrading SAP NetWeaver when multiple usage types are in use, so you will have to upgrade all usage types at the same time. For this reason, SAP recommends that you perform a new installation with phaseout as opposed to an upgrade in most circumstances.

This current release restriction might be eliminated with future availability of the complete SAP NetWeaver stack; therefore, you should check for an updated version of SAP Note 1407530, "Release Restrictions for SAP NetWeaver 7.3 — PI."

PAM compliance

Check the Product Availability Matrix (PAM) for the supported operating systems and database releases running SAP NetWeaver PI 7.3 by first going to http://service.sap.com/pam and then searching for “SAP NetWeaver 7.3.”

Starting with SAP NetWeaver PI 7.1, only 64-bit operating systems are supported. If your productive system is currently still running on a 32-bit operating system, you will have to do an OS migration of your productive environment before you can technically perform an upgrade. As this additional step has to be considered for your project as an additional downtime phase of your productive system, you might be better off with a new installation on newer (faster) hardware instead.

However, you can first perform the OS migration of your productive systems by system copy mechanisms followed by an upgrade, if the additional downtime is acceptable for your business. The same argument is valid where your underlying database release is no longer supported with SAP NetWeaver PI 7.3.


Downtime requirements

For planned downtime requirements for your productive XI/PI system, you will certainly have to set up an SLA with the business departments relying on the middleware.

Depending on the maximum allowed planned downtime (in hours), you might not have the option to go for an upgrade at all. Based on our experience, a well-prepared technical upgrade can be achieved with a downtime of up to four hours. The downtime of a PI system depends greatly on the existing runtime data, such as the number of active messages in the DB tables. Additional downtime, independent of SAP software, could include taking care of the OS, including network/FTP and DB patches. We calculate up to eight hours of manual rework after the technical upgrade, which includes changes to Integration Directory objects, such as communication channel adjustments, or required maintenance to activate newer features like message packaging. The technical upgrade procedure itself is divided into an online preparation phase, where a parallel system environment with a new SAP JVM and more is created for your running system, followed by a minimal offline upgrade phase, which is used to reduce the actual upgrade downtime to a minimum; you still need to execute manual steps afterwards to get everything up and running again.

If during an upgrade you suspect that it will exceed the maximum downtime allowed, you might have to decide at a predetermined moment in time whether to finish the upgrade, even if it is running longer than expected, or to restore the system and schedule another time for the technical upgrade of your system. During the preparation phase of the upgrade, you should have been able to calculate an estimated downtime for your productive system and, depending on the calculated downtime and the downtime requirement, you would have been able to work out if they fit or not.

If your business does not allow for more than one business day downtime (eight hours), our recommendation is that you consider installing a new PI 7.3 system and keep your old productive system running whilst all interfaces are manually redirected. In this situation, it would even be possible for you to switch interface scenarios back from PI 7.3 to the old environment if you were to encounter a critical showstopper situation with the new interfaces on PI 7.3, which you had been unable to detect during the validation phase.

Note that if you are performing an update from PI 7.1 or EHP1 for PI 7.1, the downtime is normally less than the estimates given above, so keep this in mind when evaluating your maximum allowable downtime.
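The reasoning above reduces to a simple comparison. The sketch below uses the four-hour technical downtime and eight-hour manual-rework figures from the text as defaults; substitute your own measured estimates from the upgrade preparation phase:

```python
# Sketch of the downtime-based decision described above. The default hour
# figures are the rules of thumb quoted in the text, not fixed SAP values.

def deployment_option(max_allowed_hours: float,
                      technical_hours: float = 4.0,
                      manual_rework_hours: float = 8.0) -> str:
    """Recommend an approach based on whether the estimate fits the SLA window."""
    total = technical_hours + manual_rework_hours
    if total <= max_allowed_hours:
        return "upgrade in place"
    return "new installation with phase-out"
```

With the defaults, a business that allows only one business day (eight hours) of downtime lands on the new-installation option, matching the recommendation in the text.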


IT/operations-related decision factors

High-availability setup

We assume that you have already configured a reliable high-availability setup for your mission-critical middleware system. Starting with SAP NetWeaver 7.0, SAP introduced a new option for configuring SAP NetWeaver systems using the ABAP Central Services, which can now be separated from the central instance in parallel to the SCS for Java. Each stack (ABAP and Java) has its own Message Service and Enqueue Service. For ABAP systems, the Central Services are referred to as ASCS, and for Java systems as SCS. The ASCS and the SCS are classified as SPOFs and, therefore, require a high-availability setup.

Previously, the only supported configuration was the ASCS that was integrated within the ABAP central instance — and, therefore, the ABAP system central instance also needed a HA setup. In the graphic above, this setup is represented with SCS/CI on the right side.

For PI 7.3, we recommend that you set up the system using the ASCS/SCS only. If you have already made the configuration changes to your productive XI/PI landscape, an upgrade will carry the features over to PI 7.3. If your old system is still running on SCS/CI, you might want to start with a new PI 7.3 installation using the new configuration option for the ASCS/SCS HA setup from the very beginning.

Auditing requirement

Do you communicate business relevant data directly from your productive XI/PI system to external customers and partners? Is your business using payload modification on XI/PI? Do you create completely new messages in XI that cannot be reproduced outside of the original sending systems, due to temporary data retrieval?

If you answer ‘yes’ to any of these questions, we assume that you have a high demand for storing the messages processed and transferred via the XI/PI system and for making sure they remain available to your business teams, and even to external auditors. In this case, you have a strong requirement for storing historical message and message-processing data of your productive system. Therefore, you should consider upgrading your environment, as this will best enable you to retain this historical data.

Alternatively, you have to keep the latest backup of your productive XI/PI system on hand and ensure that, if business needs require access later on, the system could be rebuilt and access to the message relevant data could be provided.


If you have mostly internal communication going through your XI/PI system, or you use an additional EDI adapter for external communication that takes care of archiving and access issues, you might not be interested in historical data at all. In this case, you do not require audit-relevant information, but your business might still need to know about messages processed weeks ago. In this case, you have to rate how important the history data of messages processed in your productive XI/PI system is to you and your business.

Basic and operations customizing

“How intensively have you already built an operational backbone around your productive XI/PI system?” That question is the target of this criterion. Have you already integrated your XI/PI system tightly into your monitoring landscape? For example, does it consist of Solution Manager, central monitoring systems, and firewall/DMZ settings? In this case, you might not want to invest again in the configuration steps necessary to set up a parallel new PI 7.3 system. Instead, you want to use the existing configuration and simply upgrade your existing XI/PI system. Nevertheless, if you are planning to migrate to the new Solution Manager central monitoring, then much of the customization will be replaced by, or have to be modified for, the new monitoring. In this case, even if you do have a lot of customization, you might still be better off with a new installation. However, if you have not yet invested too much in the configuration, you might consider the effort acceptable to start from scratch and integrate with, or start building, a new monitoring environment around your new PI 7.3 system.

Basis objects, such as alert configuration (alert categories and alert rules), interface archiving settings, and so on, can be transported to ease the transfer effort. Other configuration changes, such as changes to the instance profiles and JEE properties, have to be reconfigured manually in the new PI 7.3 system.
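Because instance profile and JEE property changes must be reapplied by hand, it helps to track exactly which settings differ between the old and new systems. The sketch below compares two property exports and reports what still needs manual migration; it assumes hypothetical key=value export files (the actual exports produced by SAP tooling may require a different parser), and the filenames are placeholders.

```python
# Compare JEE property exports from the old XI/PI system and the new
# PI 7.3 system to find settings that still need manual reconfiguration.
# The key=value file format and the filenames below are assumptions for
# illustration only.

def load_properties(path):
    """Read a key=value file into a dict, skipping blanks and comments."""
    props = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def diff_properties(old_props, new_props):
    """Return (missing, changed): keys absent from the new system, and
    keys whose values differ between old and new."""
    missing = {k: v for k, v in old_props.items() if k not in new_props}
    changed = {k: (v, new_props[k])
               for k, v in old_props.items()
               if k in new_props and new_props[k] != v}
    return missing, changed

if __name__ == "__main__":
    old = load_properties("xi_system_jee.properties")    # hypothetical export
    new = load_properties("pi73_system_jee.properties")  # hypothetical export
    missing, changed = diff_properties(old, new)
    for key, value in sorted(missing.items()):
        print(f"MISSING in PI 7.3: {key}={value}")
    for key, (old_v, new_v) in sorted(changed.items()):
        print(f"CHANGED: {key}: {old_v} -> {new_v}")
```

A diff report like this is only a checklist aid; each flagged property still has to be evaluated and applied through the appropriate SAP configuration tools.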

Third-party adapters and content

SAP and software partners are working to ensure that third-party adapters certified for older releases are also certified for PI 7.3 (this includes the Java-only Advanced Adapter Engine Extended, also known as the AEX). SAP offers a solution catalog to check for software that has already been certified (http://www.sap.com/ecosystem/customers/directories/SearchSolution.epx). Additionally, you can contact the software provider to ask how far along they are with the PI 7.3 certification of their adapters.

One possible option for overcoming the nonavailability of these solutions for PI 7.3 is to install a noncentral Adapter Engine (or Advanced Adapter Engine, if you are currently using PI 7.1) in the productive XI/PI system to host the third-party adapters. You would then change the communication channel configuration to point to the noncentral Adapter Engine for execution, instead of the central Adapter Engine used previously. Next, you would upgrade the XI/PI system and keep the noncentral Adapter Engine on the old release until the third-party adapters become available for PI 7.3 as well. More information about this intermediate solution using a noncentral Adapter Engine can be found in the release notes of SAP NetWeaver PI 7.3: What’s New in SAP NetWeaver 7.3 (Release Notes)>Process_Integration>Advanced_Adapter_Engine
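Before repointing channels to a noncentral Adapter Engine, you first need an inventory of which communication channels actually use third-party adapters. The sketch below scans a simplified XML export for such channels; the XML structure, element and attribute names, and the adapter type names are all hypothetical (a real Integration Directory export has a different schema and would need a matching parser).

```python
# Inventory communication channels that rely on third-party adapters so the
# affected interfaces can be repointed to a noncentral Adapter Engine.
# The XML layout and adapter type names below are illustrative assumptions.
import xml.etree.ElementTree as ET

# Hypothetical adapter type identifiers for third-party adapters.
THIRD_PARTY_ADAPTERS = {"Seeburger_AS2", "iWay_SFTP"}

def find_third_party_channels(xml_text, third_party=THIRD_PARTY_ADAPTERS):
    """Return (channel name, adapter type) pairs for channels that use a
    third-party adapter and therefore need repointing."""
    root = ET.fromstring(xml_text)
    hits = []
    for channel in root.iter("CommunicationChannel"):
        adapter = channel.get("adapterType", "")
        if adapter in third_party:
            hits.append((channel.get("name"), adapter))
    return hits

# Minimal sample in the assumed simplified format.
sample = """
<Directory>
  <CommunicationChannel name="CC_Partner_AS2" adapterType="Seeburger_AS2"/>
  <CommunicationChannel name="CC_File_Out" adapterType="File"/>
</Directory>
"""

for name, adapter in find_third_party_channels(sample):
    print(f"{name} uses third-party adapter {adapter}")
```

Such an inventory gives you the worklist of channels whose configuration must be redirected to the noncentral Adapter Engine before the upgrade.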

Alternatively, you could keep the scenarios that use the third-party adapters running on the productive XI/PI system and plan to invest time and resources later in redirecting the relevant interfaces completely to the newly installed PI 7.3 system. Our recommendation, as listed above, is a new installation of PI 7.3 while keeping these scenarios running in the productive system for now.

The same considerations apply to delivered and certified XI content.


About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.

Copyright © 2014 Deloitte Development LLC. All rights reserved.
Member of Deloitte Touche Tohmatsu Limited