
Case Studies on Platform Migration and Refactoring


SERIOUS

DELIVERABLE D1.4 – Case Studies on Platform Migration and Refactoring

Project number: ITEA 04032

Document version no.: WP1 Deliverable 1.4 final version

Edited by: WP1 partners

ITEA Roadmap domains:

Major: Services & software creation

ITEA Roadmap categories:

Major: Software engineering

Minor: System engineering


Table of Contents

1 ABSTRACTS

2 INTRODUCING XUNIT SUBSYSTEM TESTS ALONGSIDE REFACTORING

3 REFACTORING FOR PERFORMANCE

4 SOCKET LAYER CASE STUDY

5 NOKIA S60 APPLICATION ANALYSIS

6 MRI CONFIGURATION FRAMEWORK

7 REFACTORING A DRM SYSTEM

8 MIGRATING MIP FROM JAVA TO MICROSOFT .NET

9 MIGRATING TO A GRAPHICAL USER INTERFACE

10 REFACTORING FROM HW TO DISTRIBUTED SW PLATFORM

11 APPLICATION OF CONCERN ANALYSIS

12 ANALYSIS OF NOKIA MAEMO PLATFORM

13 ARCHITECTURE RECOVERY OF A LEGACY IMAGING SYSTEM

14 EVOLUTION OF A LEGACY SYSTEM TOWARDS SOA

15 REFACTORING JEE APPLICATION TO SPRING FRAMEWORK

16 GLOSSARY

17 REFERENCES


1 Abstracts

This document reports all case studies that have been carried out in the context of Task 1.3, Refactoring Evolving Systems, of the SERIOUS project. The goal was to put into practice the techniques developed during the project for supporting effective technical evolution of software systems and preventing software decay. The reported cases vary from reverse engineering models from the implementation (e.g. architecture recovery) to refactoring (at various levels: architecture, design and code), detecting problems, and checking architectural conformance.

The remainder of this chapter presents a short overview of most of the case studies. All case studies are elaborated in the remaining chapters of the document.

1.1 Case study: Introducing xUnit subsystem tests alongside refactoring

Partner: Alcatel-Lucent / University Antwerp

During refactoring, a developer needs fine-grained tests to ensure the behavior-preserving nature of the applied operations. To support a refactoring project, a new level of fine-grained tests is introduced using an xUnit-style testing framework, to find defects earlier and to grow confidence through frequent test execution.

In this case study, we first investigate the technical feasibility of creating subsystem-level builds with an integrated testing framework. Secondly, existing higher-level tests are translated to xUnit. We furthermore inspect the internal quality of the test code (which has to be maintained too) and formulate some guidelines.

xUnit-style testing was perceived as easy to introduce (2-3 person-days, PD). Being widespread, with a solid vocabulary and documentation on the Internet, it was also perceived as easy to learn.

1.2 Case study: Refactoring for performance

Partner: Alcatel-Lucent / University Antwerp

In this case study, the design of a Broadband Access Node component was refactored to become more resource-efficient, i.e. to reduce memory consumption and startup time. This would save the component from a re-implementation, as the system was close to resource exhaustion.

After a structured analysis, three points of improvement were identified: removal of wrappers for primitive data types, compacting the data layout of a PSD shape, and reducing the number of persistent tables.

The first step alone resulted in a reduction in memory usage of 51% and in startup time of 33%. The subsequent steps reduced the time to read a spectrum profile by a further factor of 10 and the memory footprint of a particularly large data type by a factor of 6.

1.3 Case study: Socket layer

Partner: Alcatel-Lucent / University Antwerp

See chapter 4.


1.4 Case study: Nokia S60 application analysis

Partner: Nokia

With more and more features being added to mobile phones as required by the market, the size and complexity of the device software are growing significantly, while at the same time the product development cycle is getting shorter. How to maintain and control the quality of software assets in a cost-efficient way, given this ever-growing size and complexity, is a big challenge for Nokia. The system architects and program managers need techniques and tools to manage the complexity, to control the quality, and to support the evolution of the software systems. The case study analyzes a subsystem of the S60 software platform of the Nokia smart phone / multimedia computer product family. Its goals were to help software architects and product managers improve software quality, system performance and software maintainability and, moreover, to offer a real-time view of the problems so that they can monitor and control the system evolution.

1.5 Case study: MRI configuration framework

Partner: Philips

Within the MRI business of Philips Healthcare, a multi-MLOC software archive of mainly .NET and C++ code is developed and maintained. This software currently supports a wide range of different scanner types from one single archive. In fact, the same software runs on virtually every scanner type. The mechanism through which this is achieved relies heavily on configuration parameters to change the behavior of the software.

A non-trivial amount of code (approximately 100 KLOC) has been written to access the configuration values. Although several layers can be identified in this code, in reality it forms an almost monolithic block. It is, for instance, virtually impossible to change the way configuration is stored without affecting almost every other layer of code.

In 2007, a configuration framework was to be implemented that would replace the existing configuration access code. Some of the design goals of this framework are:

- A strict separation of the data model (what values can be retrieved) from configuration technology (how values are retrieved).

- Well-defined configuration upgrade rules that allow configuration data from one version of the software to be imported into the configuration of another.

- A separation of the configuration interface from the configuration storage implementation, making it easier in the future to change storage technologies.

1.6 Case study: Refactoring a DRM system

Partner: Philips

The main objective of this case study is to develop a robust client-server solution supporting DRM (Digital Rights Management) specifications, specifically the Open Mobile Alliance (OMA) DRM v2 standard for online media distribution to the PC platform.

These DRM systems offer a rights model that can be used by the owner of the content and by the buyer of the content. Using a DRM system, the owner and buyer agree on how the purchased content (e.g. obtained via the Internet) is to be used. Subsequently, the DRM system enforces this agreement. Possible rights on the content are, for example, the ability to burn content to CD-R, the ability to export to a portable device (secure or non-secure) and time-limited usage.

The client software is targeted at the 32-bit Windows environment (2000, XP), while the server is Java (J2EE) based and can run on Linux, Unix and 32-bit Windows systems.

The Modena project is based on a client-server solution obtained from Liquid Audio. Philips has worked for more than a year to make the product more robust, faster responding and, generally, "consumer-quality" software. Several refactoring techniques are used to achieve this goal, such as:

- Software metrics to measure the quality of various parts of the software

- Change metrics to guarantee a certain quality level during refactoring

- Code duplication detection tools

- Automatic documentation extraction (doxygen)

- Test Driven Development on components that needed to be changed or redesigned

- Automated tests during the release procedure to prevent regression and to guarantee a certain quality level

- Redesigning (sub)components using I-Mathic, a tool for translating requirements of a software (sub)system into a state model that can be formally verified.

1.7 Case study: Migrating from Java to Microsoft .NET

Partner: Philips

MIP (Medical Imaging Platform) provides generic connectivity, archiving, printing, database and viewing functionality that can be used in all product groups of Philips Healthcare (e.g. X-Ray, MRI, CT and Ultrasound).

In 2002, the MIP code base was entirely written in Java and interfaced with the client code from product groups through COM technology. Recently, it was decided to migrate to Microsoft .NET technology, both for the implementation of the MIP code base and for the interface with the product groups. This case study describes the migration of MIP from Java technology to .NET.

It is a large-scale refactoring effort, covering about 1 MLOC, 2.5 years of throughput time, about 30 person-years and four major releases. Interesting aspects include:

- Normal functional development continued while refactoring.

- Three major releases were delivered while refactoring.

- For backwards compatibility towards the product groups, COM technology remains supported.

- Performance and security were brought to a higher level.

1.8 Case study: Migrating to a graphical user interface

Partner: Philips

Philips offers a complete portfolio of MRI systems that are technologically advanced yet simple to operate, increasing efficiency, ensuring more comfortable exam experiences for patients and providing superb diagnostic clinical results. Image quality, ease of use, patient throughput and uptime of the system are important performance figures of the product.

Philips Healthcare delivers two types of MRI scanners: the cylindrical and the high-field open system. Although the mechanics of the magnet systems are completely different, many other system components, as well as the software platform, are shared by the product lines.

A suite of service applications is provided by the MRI system for use during the production, installation and maintenance phases of MRI systems. Remote execution of service procedures has also been designed and implemented for parts of the service application suite. This design is an enabler for high system availability and short response times to requests for service. Another important reason for designing remote service into the system is the reduction of travel cost.

The user interface technology to be used is called the Field Service Framework. This framework is a generic application, fulfilling basic requirements such as user authentication and authorization use cases. Product lines, like MRI, have to build their own plug-ins for this framework. The user interface technology is currently based on Microsoft IIS with ASP and Internet Explorer as the thin client. The next-generation Field Service Framework is designed around Microsoft .NET technology. The business logic of the MRI system should be independent of the technology choice.

The main subject of this case study is refactoring and migration from one user interface platform to another: a migration from a VT220-based user interface technology towards the Field Service Framework (FSF) user interface.

1.9 Case study: Refactoring from HW to distributed SW platform

Partner: Philips

The BRICS case study runs in the X-ray department of Philips Healthcare. It is a medium-scale software evolution project spanning at least three years, with at least four major releases planned; it started in Q2 2005 and is, as of today, in its fifth increment.

An initiative has been set up in the X-ray department to move from a hardware platform for real-time image processing to a distributed software environment. This new image processor architecture is envisioned as a (set of) image processing (IP) service(s). Interesting challenges for the BRICS case study are:

- Defining an architecture that ensures real-time behavior and, of course, also supports the exposed services.

- Fast incorporation of new IP algorithms.

- Achieving re-usability by offering high-quality software IP components.

The rationale behind this platform approach is the following: an X-ray system has many hardware and software subsystems, several of which may be provided in the future with a service interface. By having the various subsystems provide their interfaces to external parties, it becomes much easier to integrate these subsystems with other systems (third-party components or other medical imaging devices).

The deliverable of BRICS is a component suite consisting of (a) a framework or middleware layer, (b) IP modules, and (c) tools and utilities.

A service is seen as a function that is well-defined, self-contained and does not depend on the context or state of other services. It is responsible for the state of one distinct block of backend data; hence a service is very suitable to be the outside-world interface for a device or subsystem.

Further interesting challenges for this project are:

- Defining the right granularity for the exposed services.

- Implementing them with the right level of security.

The introduction of these IP services will be done gradually, starting with the most important ones and gradually opening up more and more functionality to external clients.


1.10 Case study: Application of concern analysis

Partner: Tampere University of Technology / Nokia

See chapter 11.

1.11 Case study: Analysis of Nokia Maemo platform

Partner: University Antwerp

In this case study we re-document the architecture of a large-scale Maemo platform by reverse engineering the interactions between code-level entities and abstracting them to a higher level, i.e. interactions between the building blocks of the application (the subsystems). We also apply this approach to a number of versions throughout the system's history, to evaluate the stability of the evolving system.

Secondly, to assess the maintainability of the evolving system, we use a selection of quality models based upon ISO 9126. In particular, we use lines of code to assess analyzability and cyclomatic complexity to assess testability.

By applying multi-version analysis we can evaluate whether the internal quality is increasing or decreasing and, more specifically, identify building blocks that score poorly on maintainability and keep deteriorating, thereby becoming the primary targets for focused refactoring.


1.12 Case study: Architecture recovery of a legacy imaging system

Partner: Universidad Politécnica de Madrid

This case study will be executed on a medical imaging product, based on Java, supplied by Ibermática. It is currently in use in several Spanish hospitals. The system allows doctors to visualize high resolution medical images and manipulate them applying several transformations.

This case study will document the architecture of the existing system, with a special focus on the quality-related aspects. The architecture recovery process will be based on QAR (Que-ES Architecture Recovery), a generic recovery workflow based on the traditional Extract-Abstract-Present paradigm. The process will be adapted to the specifics of the system. Some highlights of the case study are:

- Use of general-purpose, widely used, visual tools. Instead of the existing recovery-specific frameworks, such as Moose or Rigi, the case study will choose some well-known modeling or profiling tools, such as Omondo UML or Eclipse TPTP. These tools allow a continuous visualization of the system, which should be very valuable in this type of process. Thus, this case study will evaluate their suitability for architecture recovery activities.

- Combined analysis of static and dynamic views.

- Use of software metrics to aid the recovery process and evaluate the quality of the system.

1.13 Case study: Evolution of a legacy system towards SOA

Partner: Universidad Politécnica de Madrid

This case study will be executed on the same Java-based medical imaging product, supplied by Ibermática and described in Section 1.12, that is currently in use in several Spanish hospitals.

The objective of the case study is to evolve the legacy system in order to improve its quality. The evolution will address the main concerns with the system of both users (gathered from surveys) and developers. The main areas of improvement are: user experience (usability and performance), system maintainability, and interoperability with other medical systems. These requirements, plus the documentation obtained from the previous case study, will be converted into an evolution plan which will guide the process. Currently the case study is composed of the following stages, although the list may change during the case study:

1. Platform migration towards SOA: The legacy system will be refactored at the architecture level to a SOA model. For this concrete case we have chosen the OSGi Service Platform as our component model, with the Equinox implementation as the base technology. The system will be refactored into a set of dynamic, loosely coupled services (OSGi services and bundles).

2. Replacement of the User Interface: In order to improve the quality of the user experience the UI of the product will be replaced by a substitute, which should prove to be more extensible, customizable and attractive. The chosen model for the new GUI is the RCP (Rich Client Platform) model.

3. Addition of connectivity functionality with remote imaging servers via WADO (Web Access to DICOM Objects).


1.14 Case study: Refactoring JEE application to Spring framework

Partner: Universidad Politécnica de Madrid

See chapter 15.


2 Introducing xUnit Subsystem Tests Alongside Refactoring

2.1 Problem Statement

2.1.1 Domain

During refactoring, a developer needs fine-grained tests to ensure the behavior-preserving nature of the applied operations. In a refactoring case study on a Transport Layer protocol implementation of a Broadband Access Node, a particular data structure has to be replaced.

2.1.2 Current Situation

In the current test strategy, subsystem testing is the most fine-grained testing activity. As there are no strict rules describing how to perform these, the actual approach varies per developer team. Some teams test manually in a command line environment based upon test case specifications, others write codified tests in a self-composed test framework, still others use a scripting language.

2.1.3 Goals and Expected benefits

In order to control the refactoring process, we need a process that stimulates quick error detection (lower cost per defect) and adds to the developer's confidence as he progresses. This requires a testing approach consisting of a regression batch of fine-grained test cases that can be rapidly executed after every major refactoring operation. As such, developers can (i) write tests that are focused on the units to be changed and (ii) frequently execute the regression set without being annoyed by the delay. During this case study, we first investigate the introduction of xUnit-style subsystem tests integrated as part of a test build (technical feasibility) to achieve this goal. Secondly, we study how existing test cases can be reused in this framework. Finally, we quantify the learning curve associated with adopting this kind of codified tests.

2.2 Solution

2.2.1 Approach

The xUnit family of testing frameworks originates in agile development circles, providing the infrastructure to codify automated, explicit, repeatable, independent and fast unit tests that provide rapid feedback during code-test development cycles. Actual implementations such as JUnit (Java) and NUnit (.NET) are the de facto standard for unit testing (and beyond) today. In the scope of this case study, we chose to apply xUnit to introduce tests for the interfaces of the subsystem under study.
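To make the approach concrete, here is a minimal sketch of an xUnit-style test in C++. The names (Connection, ConnectionTest) are hypothetical; the deliverable does not disclose the actual framework or subsystem interfaces, so this merely illustrates the setUp/test/tearDown structure that xUnit frameworks share:

#include <cassert>
#include <iostream>

// Hypothetical subsystem interface under test.
struct Connection {
    bool open(int port) { return port > 0; }
    void close() {}
};

// Minimal xUnit-style fixture: a fresh setUp/tearDown around every test
// keeps tests independent and repeatable.
class ConnectionTest {
public:
    void setUp()    { conn = new Connection(); }
    void tearDown() { conn->close(); delete conn; }

    void testOpenValidPort()   { assert(conn->open(5000)); }
    void testOpenInvalidPort() { assert(!conn->open(-1)); }

    void runAll() {
        run(&ConnectionTest::testOpenValidPort,   "testOpenValidPort");
        run(&ConnectionTest::testOpenInvalidPort, "testOpenInvalidPort");
    }

private:
    void run(void (ConnectionTest::*test)(), const char* name) {
        setUp();
        (this->*test)();
        tearDown();
        std::cout << name << " passed" << std::endl;
    }

    Connection* conn;
};

int main() {
    ConnectionTest().runAll();
    return 0;
}

Because every test runs against a freshly constructed fixture, such a regression batch can be executed after each refactoring step without tests influencing one another.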

2.2.2 Major Results

A proof of concept has been integrated in a test build, and some test cases have been transformed from another test environment currently in use. The deployment and learning curve are moderate; this study took 2-3 PD.

Secondly, we reviewed this test code to increase its internal quality. We refactored to reduce the amount of duplication by promoting reuse in test cases, following patterns in the handbook.
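As an illustration of this kind of test refactoring, the sketch below (building on the hypothetical Connection fixture above) extracts a shared Creation Method, one of the patterns commonly used to remove duplication from test code:

// Before: every test repeated the same multi-step construction of a
// valid, opened connection. After: one Creation Method centralizes it,
// so a change to the construction sequence touches a single place.
Connection* makeOpenConnection(int port = 5000) {
    Connection* c = new Connection();
    assert(c->open(port));   // guard: the fixture itself must be valid
    return c;
}

void testSomethingOnOpenConnection() {
    Connection* c = makeOpenConnection();
    // ... exercise and verify behavior ...
    c->close();
    delete c;
}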


Thirdly, we investigated how batches of automated, existing tests launched in the command-line environment could be made more xUnit-style, a smaller increment than transforming them. We described how the test framework would need to be modified.

2.2.3 Success Indicators

The case study was successful as it met the desired goals:

1. The integration in a test build was a minor task.

2. Adapting xUnit and transforming some existing tests was a job of a limited number of person-days (PD).

3. We documented, in the form of patterns, how the techniques that we composed increase the internal test quality (i.e. reducing duplication and introducing abstraction).

4. After the case study, the composed subsystem testing framework was deployed for other subsystems as well.

2.3 Conclusion

2.3.1 Summary

The introduction of xUnit style tests to (i) become the standard subsystem test environment and (ii) serve as test harness during refactoring resulted in a working proof of concept. The techniques that were used are documented as patterns in the refactoring handbook.

2.3.2 Lessons Learned

Introducing xUnit-style testing proved straightforward and evoked enthusiasm among the involved developers. In the meantime, this kind of tests has been written for another subsystem as well.

While transforming existing tests proved to be easy, it is not considered efficient in a project context. Focusing on codifying tests for subsystems where manual testing is applied can be better motivated in cost/benefit terms.

2.3.3 Final Recommendations

Before starting a refactoring project, the presence of tests should be a precondition. In their absence, one should incorporate effort to introduce tests. Writing xUnit style tests targeting the interfaces of the system under study can be a time efficient manner to obtain a regression suite.


3 Refactoring for Performance

This chapter summarizes a case study reported in two papers: "Refactoring for Performance: An Experience Report" by Matthias Rieger, Bart Van Rompaey, Bart Du Bois, Karel Meijfroidt and Paul Olievier, accepted at the Third ERCIM Symposium on Software Evolution; and "Refactoring State Machines" by Matthias Rieger, Bart Van Rompaey and Serge Demeyer, accepted at the Sixth Nordic Pattern Languages of Programs Conference (VikingPloP). The experience gathered in this case study is also written up as a pattern in the Refactoring for Performance chapter of the Refactoring Handbook.

3.1 Problem Statement

3.1.1 Domain

Our pilot project is a 130 kSLOC C++ subsystem, part of a Broadband Access Node, that manages configuration parameters for network lines. The system is implemented in a distributed fashion, with one Controller Card maintaining the configuration database and several Line Cards; CORBA-like middleware on top of Ethernet ensures communication between the different processors. Middleware services such as memory management and persistency are provided by the framework. A home-grown code generator creates the source code from specifications. One design goal of this framework was to make every data element look uniform, so that data manipulation in client code, be it streaming for communication or persistency, could be handled in a common manner (simplifying, for example, the code generator). In particular, this meant representing even primitive types as objects.
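The following sketch (hypothetical names, not taken from the actual code) illustrates what such wrapping of primitives looks like, and what the refactored hybrid form could be:

#include <cstdint>

// Anti-pattern sketch: a primitive wrapped in an object so that every
// data element looks uniform to streaming/persistency code. Each value
// pays for a vtable pointer, heap allocation and indirection.
class Int32Value {
public:
    explicit Int32Value(int32_t v) : value(v) {}
    virtual ~Int32Value() {}
    virtual void stream(/* Buffer& out */) { /* serialize value */ }
    int32_t get() const { return value; }
private:
    int32_t value;   // 4 bytes of payload inside a much larger object
};

// After refactoring: parameters held as plain primitives; uniform
// handling is recovered through generated or free streaming functions.
struct LineParameters {
    int32_t noiseMargin;
    int32_t maxBitRate;
};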

3.1.2 Current Situation

The system has been around for three years. It is currently dangerously close to exhausting the memory and runtime reserves of the hardware components it was initially deployed on. New requirements ask for the number of spectrum profiles, one of the configuration parameter types, to be doubled.

3.1.3 Goals and Expected benefits

The goal of the case study consists of (i) identifying the root causes of the poorly performing system, (ii) proposing a more memory-efficient design and (iii) refactoring the current implementation towards that solution.

In particular, the performance goals were stated as:

- Improve the performance of reading a spectrum profile at start-up by a factor of 10.

- Improve the memory footprint of a spectrum profile to an acceptable value, namely at most 2 times the raw data size.

- Drastically reduce complexity in the profile handling code (set request).

A complete rewrite would be necessary if the system could not be redesigned to perform under the new requirements.

3.2 Solution

3.2.1 Approach

We followed a structured approach, consisting of (i) getting a Specific Problem Description using developer interviews and static analysis, (ii) Identifying Improvement Opportunities that would substantially improve the performance, and (iii) estimating the effort, risks and gains of this refactoring proposal. We identified three points of improvement:

1. Refactoring the wrapped primitives (an anti-pattern that we call Redundant Objectification) to plain primitives promised a considerable memory gain (estimated at -82% using a static count of objects in memory at runtime).

2. Handling PSD shapes, a type of large configuration parameter, as an array of basic types instead of a more complex data layout.

3. Reducing the number of persistent data tables (tables written to disk) from 35 to 16.

To mitigate the main risks of dealing with unknown code and introducing regressions, we applied a stepwise refactoring approach and introduced tests along the refactoring to safeguard changes made to a code generator.

3.2.2 Major Results

As a result of the Redundant Objectification refactoring, the memory usage of this subsystem was reduced by 51%, while startup time improved by 33%. Although the static estimation of the gains proved not to be very accurate, this reduction freed enough memory for upcoming requirements. The two subsequent steps reduced the time to read one spectrum profile even further, from 1.8 seconds to 0.16 seconds. The associated memory footprint of a spectrum profile improved from 12 Kbytes to 2.15 Kbytes.

Along the way, whenever complex code structures were discovered, the design was reconsidered and refactored appropriately. This proved valuable especially in the profile handling component. Its complexity was measured using the metrics maximal nesting depth per function and cyclomatic complexity per function. Maximal nesting depth was reduced from 6 to 3, while cyclomatic complexity improved from 52 to 8.
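The report does not show the refactored code itself, but the kind of restructuring that reduces both metrics can be sketched with guard clauses, assuming a deeply nested request handler:

// Before: conditions nested inside each other (sketch, depth 3).
void handleSetRequest(bool valid, bool exists, bool locked) {
    if (valid) {
        if (exists) {
            if (!locked) {
                // apply the requested change
            }
        }
    }
}

// After: guard clauses return early, flattening the nesting to depth 1
// and giving each exceptional condition one obvious exit point.
void handleSetRequestFlat(bool valid, bool exists, bool locked) {
    if (!valid)  return;
    if (!exists) return;
    if (locked)  return;
    // apply the requested change
}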

3.2.3 Success Indicators

The series of refactorings applied to the subsystem under study allowed the developers to implement the set of features scheduled for the next release, due to the availability of enough free memory.

3.3 Conclusion

3.3.1 Summary

We employed a structured approach to identify and tackle performance problems in a C++ system. Starting from a generic problem description, we used a combination of developer interviews and static analysis to identify the most beneficial areas of improvement. Note that mostly due to the deep knowledge that we could tap into by asking developers familiar with the system, it was not necessary to employ dynamic analysis to locate the center of the problem. We subsequently proposed a more efficient design and assessed the expected gain as well as the effort and possible pitfalls of the refactoring operation. This allowed us to select and apply a development strategy ensuring the behavior preserving nature of the restructuring. In this pilot project we were confronted with a system that suffered from a performance problem typical for embedded software, similar to and illustrating earlier performance anti-pattern descriptions. We contribute an anti-pattern named Redundant Objectification.


3.3.2 Lessons Learned

The lessons learned can be summarized as:

- A pure OO implementation in an embedded context can consume too much memory. Refactoring towards a hybrid solution may be the answer.

- A combination of developer interviews and static analysis can be a suitable technique for finding the root causes of poor performance.

- Documented performance anti-patterns exist in the literature; developers should be aware of them.

3.3.3 Final Recommendations

As a contribution to the debate about using object technology in embedded systems, we learned that performance considerations must be heeded when dealing with the constraints of embedded systems. It may become necessary to relinquish the elegance and simplicity of a uniform object-oriented design and seek a hybrid objects-and-primitives solution. This requires detailed knowledge of the problem domain, to decide where the trade-off of improved performance vs. the hassle of mixing programming paradigms is most favorable. An interesting question in this respect is where the cohesion of the data-behavior combination represented by an object is weakest, due to either the simplicity of the data structure, the lightweight nature of the behavior half, or both characteristics combined.


4 Socket Layer Case Study

4.1 Problem Statement

4.1.1 Domain

The case study concerns a refactoring of a Socket Layer subsystem, part of a Broadband Access Node. The Socket Layer subsystem is part of the embedded communication software, for both internal (between boards in the Network Element) and external (between the Network Element and the external Management System) communication.

4.1.2 Current Situation

The Broadband Access Node is developed as a number of product variants, which are all part of the same product family and share a lot of software. The Socket Layer subsystem implementation, however, has diverged considerably between two of these variants through the use of compiler directives. The Socket Layer subsystem should have been kept generic and common between these two product variants, since the maintenance and extendibility of the subsystem have become problematic.

4.1.3 Goals and Expected benefits

The goal of this case study is to refactor the Socket Layer subsystem so that it becomes a truly common subsystem that fits all product variants of the product family. This will reduce the maintenance effort for this subsystem, since it will be common.

At the same time, new functionality has to be added in a generic way in order to support a new product that will become part of the same product family. This will be the most challenging part of this case study.

4.2 Solution

4.2.1 Approach

First, a Unit Test suite will be developed that is able to test all the current functionality of the subsystem for both variants.

After that, the subsystem will be refactored. All compiler directives (used to create variants at compile time) will be removed from the subsystem. The Unit Test suite will be used during this transformation to verify that supported functionality is not broken; a sketch of this kind of transformation follows below.

Finally, the new functionality will be added and the Unit Test suite will be extended to cover the new functionality required for the new product, while keeping the already supported functionality.
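The deliverable does not show the Socket Layer code itself, but the essence of such a refactoring can be sketched as follows (hypothetical names): compile-time variant selection via directives is replaced by a common interface with one implementation per product variant.

// Before (sketch): variant behavior chosen at compile time.
// #ifdef PRODUCT_A
//     const int kMaxSockets = 64;
// #else
//     const int kMaxSockets = 256;
// #endif

// After: one common subsystem; variant-specific values and behavior are
// supplied through an interface implemented once per product variant.
class SocketVariantPolicy {
public:
    virtual ~SocketVariantPolicy() {}
    virtual int maxSockets() const = 0;
};

class ProductAPolicy : public SocketVariantPolicy {
public:
    int maxSockets() const { return 64; }
};

class ProductBPolicy : public SocketVariantPolicy {
public:
    int maxSockets() const { return 256; }
};

The common code then depends only on SocketVariantPolicy, so adding a new product variant means adding one class rather than threading new directives through the whole subsystem.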

4.2.2 Major Results

The major result is one common, generic Socket Layer subsystem without any compiler directives, serving the needs of all products of the product family.

A second result is a well-performing, automated Unit Test suite that is able to test all functionality of the subsystem.


4.2.3 Success Indicators

Less maintenance effort needs to be spent on this subsystem for the complete product family.

4.3 Conclusion

4.3.1 Summary

By first making an extensive Unit Test suite, the existing functionality of the Socket Layer subsystem for both product variants could be tested after each small step of the refactoring activity. Gradually, all product-variant-related compiler directives were removed, one after the other, without breaking existing functionality. The code also became more comprehensible without all these compiler directives.

4.3.2 Lessons Learned

While writing the Unit Tests for the existing subsystem, the designer came to understand better the required functionality of the subsystem and its behavior under normal and exceptional conditions. This allowed introducing the requested new functionality in an optimal way, with less effort.

4.3.3 Final Recommendations

When introducing a substantial amount of new functionality into an existing piece of software that has no Unit Tests yet, writing a Unit Test suite for the existing functionality first pays off later when the new functionality is introduced. The piece of software is better understood, and the risk of breaking existing functionality is reduced to a minimum.


5 Nokia S60 application analysis

5.1 Problem Statement

5.1.1 Domain

This Nokia case study is about S60 application subsystem analysis. S60 is the software platform of the Nokia smart phone / multimedia computer product family. It is built on top of Symbian OS and supports the implementation of advanced UI features, including rich multimedia functions. The market share of Nokia smart phone / multimedia computer devices is growing very fast and creates more and more value for the company.

This case study deals with software (source code) analysis, including reverse-engineering, OO metrics for software quality [106] [2], static analysis and performance indication, suggestions for refactoring, evolution monitoring, etc.

5.1.2 Current Situation

With more and more features being added to mobile phones as required by the market, the size and complexity of device software are growing significantly, while at the same time the product development cycle is getting shorter. How to maintain and control the quality of software assets with ever-growing size and complexity, in a cost-efficient way, is a big challenge for Nokia. The system architects and project/program managers need techniques and tools to manage the complexity, to control the software quality, and to support the evolution of the software systems.

5.1.3 Goals and Expected benefits

The goals of this case study are to develop and test:

- Methods and tools for quality assessment and source code quality monitoring of Nokia platforms, to provide an objective and in-time system-wide overview of the software quality.

- Methods for getting architecture/component-level indications for performance refactoring via static software analysis.

The above-mentioned methods and tools should be applied in the analysis of Nokia S60 platform applications and help in improving software quality, system performance and software maintainability; moreover, the architects and product managers can have a real-time view of the problems and can monitor and control the system evolution.

5.2 Solution

5.2.1 Approach

In the case study on S60 subsystem analysis, we take the following steps in our approach:

- setting up the tool environment for analyzing an S60 subsystem;

- selecting and calculating the software metrics for measuring the software quality at both class and component level;

- viewing and presenting the analysis results using the web-based monitor tool;

- generating class diagrams and component dependency diagrams from the source code using the reverse-engineering tool;

- using the static analysis results from the above steps to pinpoint the performance problems.

An important application subsystem (the A-application hereafter) of a multimedia phone has been selected as the target of the analysis. The A-application is an important and big application, whose feature list and size keep growing. The application structure also changes between releases. It has well-known performance problems; hence it can be used to validate our approach for identifying the refactoring points for performance problems. Through this case study we try to find out whether there is any relation between the source code quality indicators and the performance problems in the A-application, and to see whether static analysis of the source code could give us any useful hints of bad runtime performance.

In addition, we try to use static analysis to predict the maintainability of software and the testing effort that will be spent in the testing phase.

This case study utilizes the whole Columbus tool set, from the Columbus reverse-engineering framework/tool [3] to the Monitor tool [4]. The Columbus tool set is a static analyzer for C++ source code. The tool set contains the Columbus reverse-engineering framework/tool, the SourceAudit tool, and the Monitor tool. The Columbus reverse-engineering tool generates the reverse-engineered source code model at compilation time, and then calculates the metrics. The SourceAudit tool, based on the reverse-engineering framework, defines the coding rules/conventions and detects non-conformance and bad smells through source code analysis. The Monitor tool provides a web-based user interface for viewing the results generated by the reverse-engineering and source code analysis tools.

We concentrate on quality metrics like size, complexity, coupling, inheritance and clone coverage that the Columbus reverse-engineering framework/tool produces. We also study the other outputs that the Columbus tool set gives us, for instance, what kind of pictures of the implementation we can retrieve from Columbus.

5.2.2 Major Results

These are the main findings from the static analysis:

- 6 complex classes out of 141, each with over 1,000 logical lines of code, all suffering from lack of cohesion;

- 13 highly coupled classes out of 141;

- the inheritance hierarchy of the application is shallow and narrow; what this indicates needs to be studied further.

The evolution of 3 consecutive releases of the A-application has been monitored, and there is no sign of improvement: the largest class is getting even larger. All the analysis results show that improvement and refactoring measures must be taken as soon as possible. It is a well-known fact that more complex classes are harder to maintain. The 6 complex and 13 highly coupled classes found during the study affect the maintainability of the A-application. However, we do not yet have a deep enough understanding to estimate the exact cost benefits gained by reducing maintenance effort.

Performance hints were also derived from the static analysis, including frequently instantiated large classes and instantiation of classes that implement multiple responsibilities. More coherent classes of the right size could improve performance: one class, one concept.

5.2.3 Success Indicators

The tool environment for Nokia S60 platform software analysis has been set up and taken into use. The analysis of the selected subsystem revealed important issues for improvement/refactoring. We do not currently have information on the projected added value, in terms of money, of the proposed improvements to the A-application and the S60 platform, but the static analysis results have proven useful for identifying performance problems and providing valuable indications for improvements.

5.3 Conclusion

5.3.1 Summary

In general the case study has achieved its goals and delivered useful results. Although attaching the Columbus reverse-engineering framework/tool to the S60 build environment requires some modification effort, the tool is easy to use after the needed modifications are done. Instructions on how to install and configure the tool, and a step-by-step video guide on how to use the monitoring tool, have been created so that software developers and managers can use the tool environment. The tool environment can be accessed through the company intranet from the project wiki page. All the analysis results have been documented and communicated to the software architects and the development team.

5.3.2 Lessons Learned

The Columbus reverse-engineering framework has been found to be a feasible candidate for analyzing subsystems of the S60 platform. However, it is still not able to handle platform-wide analysis of S60. The framework can produce a component-level call dependency diagram with moderate effort. In addition, it can generate class-level XMI UML diagrams. These generated diagrams make it possible to compare real implementation diagrams to design diagrams; such a comparison can reveal inconsistencies between the two at both component and class level. One possible cause of such inconsistencies is that the design documents have not been updated to reflect actual changes in the implementation. Further investigation is needed for all the inconsistencies found during the analysis.

The Monitor tool is a web application (Java applet) that is easy and intuitive to use. It is a critical part of the Columbus tool set, making the final analysis and result presentation very convenient. It is easy to query the metrics of classes, draw diagrams, etc. Saving frequently performed queries as shortcuts is a good way to make queries easily accessible, especially complex ones. More importantly, the tool can be used to view the results from different releases to monitor the evolution.

When a class grows very big, one should consider whether it should be split into several classes. Splitting a class will cause extra implementation work that might even delay a tight schedule, but it should pay off later through easier maintenance. Smaller, simpler classes are usually easier to maintain, and adding features to them is easier. It is difficult to say in general when a class is too big; this must be decided case by case.
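A sketch of the resulting structure (hypothetical names, not taken from the analyzed application): instead of one oversized class, each concept gets a class of its own and the original class delegates to them.

// Before (sketch): one class accumulating unrelated responsibilities,
// e.g. rendering, caching and retry handling in a single large class.

// After: "one class, one concept" - smaller classes are easier to
// maintain, and the original class delegates to them.
class MessageCache { /* caching only */ };
class RetryPolicy  { /* retry handling only */ };

class MessageView {
public:
    MessageView(MessageCache& cache, RetryPolicy& retry)
        : cache_(cache), retry_(retry) {}
    // rendering only; other work is delegated to the collaborators
private:
    MessageCache& cache_;
    RetryPolicy&  retry_;
};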


When using static analysis results to predict performance or run-time problems, these are not certainties but indicators, similar to bad smells, and they should be verified with a runtime analyzer. The architects responsible for the analyzed system are the right people to make the final judgment from those symptoms. Static analysis can be used as a starting point for runtime analysis.


6 MRI Configuration Framework

6.1 Problem Statement

6.1.1 Domain

The Magnetic Resonance Imaging (MRI) Software department of Philips Healthcare writes system- and application software that runs on a wide range of different MRI systems. For a large part, this software is delivered from one archive and deployed through one installation set.

The single-archive approach implies that the software's variety of behavior is for a large part determined at runtime. The software that deals with this variety is commonly known as the Configuration Framework.

Conceptually, the Configuration Framework consists of a fairly simple database filled with name-value pairs. The design of the framework, however, is not trivial. This is mainly due to constraints like validity checks, having to comply with uniform user interfaces within Philips Healthcare and the necessity to upgrade configuration databases from almost any previous configuration.

6.1.2 Current Situation

6.1.2.1 Mix of API and Data Model

The Software Interface of the configuration framework consists mainly of C-style functions, one for every attribute that needs to be retrieved. Configuration data itself is stored in a tree-like structure, populated with objects and attributes of distinct types. The shape of this tree and the types of the attributes are commonly referred to as the configuration data model. Below the C-style interface are several software layers that deal with issues like navigating the tree-structure of data, caching data locally, retrieving data from a configuration database, translating attributes from one data type to another, calculating attributes from other attributes, etc.

The C-style interface has resulted in a mixing of data model and framework code. For instance, a function like "get_site_altitude()" implies that the site altitude is part of the data model, while the function itself is part of the software framework. This mix also means that an engineer responsible for changes in configuration needs intimate knowledge of both the framework and the domain of the variables that are kept in that framework.

6.1.2.2 Upgrade and Maintenance

Upgrades of configuration data from one version of the software to another are normally performed by scripts that very explicitly translate the text of one (old) XML file to that of a new one. These ever-growing upgrade scripts have proven to be an increasing maintenance burden.

Maintenance of the configuration framework is labor-intensive. An average change in the data model requires changes in approximately 20 files, including documentation.

The configuration framework is fairly rigid and cannot cope with different providers of configuration data. This has caused challenges when MRI applications needed to be ported to new environments.


6.1.2.3 Code Architecture

Configuration software is exceptionally pervasive; many MRI Software source-files contain one or more calls to configuration functions, or are in some indirect way dependent on configuration data.

An important additional challenge is the fact that the MRI software archive has many parallel branches with non-trivial deltas between them. Together with the pervasiveness of configuration software, this means that a substantial change in configuration software can prove to be very costly when such a change needs to be merged to other branches of MRI software.

The current configuration framework started out as simple software that performed a simple job, but has grown over the course of the years into complex software that performs many simple jobs. Current complexity of configuration software has made it difficult to change the software any further, or to reliably predict what the cost of such changes will be.

6.1.3 Goals and Expected benefits

The goals of the Configuration Framework Improvement (CFW for short) can be summarized as follows:

- Decreased Maintenance Effort. This is the primary goal; many of the other goals mentioned are actually sub-goals of this one. Maintaining framework and data model is currently estimated to cost in the order of 1 full-time equivalent (FTE). This is expected to drop to at most 0.25 FTE.

- Increased Portability. MRI applications need to be deployed on a growing number of platforms, where they should fit in with existing methods of storing configuration data. It is therefore necessary that the configuration framework can adapt to different platforms, so that applications can run unchanged on those platforms.

- Reduction in Code Size. This can be achieved by separating the framework from the data model: no longer will there be a function for every configuration variable. A feasibility investigation showed that a reduction of around 50% in code size for configuration-specific software is possible. A reduction in code size was also seen as an important indicator of successfully averting the so-called second-system effect, where the replacement for a small, working system becomes a feature-laden, overly large construct.

- Automated Upgrade/Downgrade. Hand-written upgrade/downgrade scripts are to be avoided, since they have proven to be a significant maintenance burden and a source of errors that are found only late in the development process. Most actions taken during an upgrade can be described by very generic rules, and upgrade software based on such generic rules could deal with a large set of upgrade scenarios without needing to be adapted for specific upgrades (see the sketch below).
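The deliverable does not spell out the rule format; under that caveat, the sketch below shows what generic, data-driven upgrade rules could look like (rename a key, supply a default, drop an obsolete key):

#include <cstddef>
#include <map>
#include <string>

typedef std::map<std::string, std::string> ConfigData;

// One declarative rule per recurring upgrade action; the engine below
// stays unchanged from release to release, only the rule table grows.
struct UpgradeRule {
    enum Kind { Rename, Default, Drop } kind;
    std::string key;
    std::string newKeyOrValue;   // target key for Rename, value for Default
};

void upgrade(ConfigData& data, const UpgradeRule* rules, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        const UpgradeRule& r = rules[i];
        if (r.kind == UpgradeRule::Rename && data.count(r.key)) {
            data[r.newKeyOrValue] = data[r.key];
            data.erase(r.key);
        } else if (r.kind == UpgradeRule::Default && !data.count(r.key)) {
            data[r.key] = r.newKeyOrValue;   // fill in a missing value
        } else if (r.kind == UpgradeRule::Drop) {
            data.erase(r.key);
        }
    }
}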

6.2 Solution

A redesign of the configuration framework was performed; its main concepts are detailed in the rest of this section.

6.2.1 Approach

The following modifications were made to the configuration framework:

- Separation of 'storage drivers' from the main configuration framework, to allow configuration storage to become a variation point.

- A strict separation of data model from framework code.

- A legacy wrapper that emulates the old API for the many clients that still use it.

These modifications were made in a separate archive. The new configuration framework is then introduced, one by one, into each parallel branch of the software archive.

Note that this approach can be classified as a 'Big Bang' type of change. More iterative alternatives were considered, but were discarded because no meaningful iterations could be discerned.

6.2.1.1 Storage Drivers

A great deal of the configuration software is agnostic to the actual source of configuration data. This part, which deals with issues like caching and distribution of data, type conversion, validity checks and the configuration UI, has been written to obtain its data through a new interface, referred to as the 'Hierarchical Interface'. Polymorphic implementations of this interface allow fetching of data from various sources. The hierarchical nature of the interface also allows composition of different data sources into one unified 'tree of data'.
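A minimal sketch of what such an interface and its polymorphic drivers could look like (names and signatures hypothetical; the actual Hierarchical Interface is richer):

#include <string>

// Hypothetical hierarchical storage interface: clients address values
// by a path in the tree, without knowing where the data really lives.
class HierarchicalStore {
public:
    virtual ~HierarchicalStore() {}
    virtual bool read(const std::string& path, std::string& value) = 0;
    virtual bool write(const std::string& path, const std::string& value) = 0;
};

// Polymorphic drivers: one per storage technology.
class DatabaseStore : public HierarchicalStore {
public:
    bool read(const std::string& path, std::string& value) { /* query db */ return false; }
    bool write(const std::string& path, const std::string& value) { return false; }
};

class FileStore : public HierarchicalStore {
public:
    bool read(const std::string& path, std::string& value) { /* parse file */ return false; }
    bool write(const std::string& path, const std::string& value) { return false; }
};

A composite implementation of the same interface can mount such drivers under different path prefixes, which yields the unified 'tree of data' mentioned above.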

6.2.1.2 Data Model Separation

In this specific case, data model separation ultimately means that code that used to look like this:

CONFIG_get_site_altitude();

Now looks like this:

config.get( SiteAltitude );

Obviously, there are many more changes to the code than just this visible effect. This change intends to separate the volatile parts (the data model) from the more stable framework code.

Separation of data model is a necessary condition for the separation of concerns: experts in the MRI or hardware domain must be able to modify the data model to suit their needs, without forcing them to change the configuration framework itself.
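A sketch of how such a separation can be realized with typed configuration keys (hypothetical; the deliverable shows only the call site above):

// Data model: plain declarations of typed keys, maintainable by domain
// experts without touching any framework code.
template <typename T>
struct ConfigKey {
    const char* path;   // location of the value in the configuration tree
};

const ConfigKey<int> SiteAltitude = { "site/altitude" };

// Framework: one generic accessor replaces the per-attribute C functions.
class Config {
public:
    template <typename T>
    T get(const ConfigKey<T>& key) {
        // look up key.path in the configured store and convert to T
        return T();
    }
};

Adding a configuration variable then means adding one ConfigKey declaration, rather than adding a function to the framework and changing every layer below it.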

6.2.1.3 Legacy Wrapper

As stated before, dependencies on configuration software permeate MRI software. When refactoring, there are two obvious ways to deal with this:

- Change all occurrences of configuration software use to the new interface, possibly by some form of automatic source code transformation.

- Create a thin wrapper of one-line functions that exposes the 'old' interface and translates calls into calls to the new interface.

Because MRI's many-branched software incurs a very real and non-trivial merge cost for large changes, the decision was made to create the thin wrapper.
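Continuing the sketch above, the wrapper reduces to one-line functions like the following, so existing call sites compile unchanged while new code uses the new interface directly:

// Legacy wrapper (sketch): the old C-style API is kept alive as a thin
// layer of one-liners delegating to the new framework.
extern Config config;   // framework instance from the sketch above

int CONFIG_get_site_altitude()
{
    return config.get( SiteAltitude );
}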


6.2.2 Major Results

At this point in time, the introduction of the new configuration software in a single branch has succeeded. Old code has been replaced with new code, and the transition occurred nearly as smoothly as anticipated.

Polymorphic implementations of configuration storage (for databases, files and calculated values) have been created and are being used. Anecdotal data suggests that implementing such storages can be done with limited effort, but this remains to be verified by implementations for new platforms.

Measurements made during development of the new configuration software indicated that the end result would be significantly smaller than the old one. An investigation is ongoing to determine whether the 50% mark was met after full integration with existing software. Data model separation has helped, but does not seem to be the largest contributor. Early indications are that simply replacing patched source code with long maintenance histories by new code, designed for its current requirements, contributes as well.

6.2.3 Success Indicators

Of the four goals specified earlier, two have been reached. The other two cannot be fully assessed yet, due to their long-term nature:

The main driver for the new configuration framework has been the reduction of maintenance effort. It will take some time after introduction to obtain enough data of sufficient quality to make judgments. Tests have shown that changes in the data model are now less scattered (on average 3 files changed, in contrast to 20), which could be an indication of a reduction in future maintenance effort.

Of the sub-goals mentioned in Section 6.1.3, two have been reached: code size was reduced, and current implementations of configuration storage indicate that new ones can be made with very limited effort, leading to improved portability.

Automated upgrade/downgrade was tested successfully on large sets of historical data. However, just like maintenance effort, this indicator needs to be collected over a longer period of actual field use before confident judgments can be made.

6.3 Conclusion

6.3.1 Summary

MRI software contains complex code to deal with variability. Changing requirements and a long maintenance history have made their marks on the current implementation. A new framework was created that strictly separates the data model from the framework code, uses polymorphic drivers for data storage, and comes with a small layer providing a legacy interface.

Software for automated upgrade/downgrade was written to reduce maintenance effort in this respect.

There is, at this time, not enough empirical data to make confident conclusions about maintenance effort or the effectiveness of automated upgrade. Two other indicators, code size and the number of files that need to change for routine maintenance, show the new framework to be an improvement over the old one.


6.3.2 Lessons Learned

A summary of lessons learned during the development of the new configuration framework would be the following:

As with many refactoring exercises, refactoring the configuration framework consisted to a large extent of re-discovering requirements. Involving 'local experts' can make a huge difference in finding most hidden requirements early. Requirements that were not found with the help of local experts tended to pop up extremely late in development.

We did not see any way to develop the configuration framework more iteratively than we did, but certainly more iterations would have countered late-found requirements.

Design for Maintenance is hard to justify, since the costs of maintenance can be difficult to quantify. It is important to make credible—and realistic—estimates, in order to obtain buy-in from all involved parties.

Direct indicators of maintenance cost cannot be measured during, or directly after development. Indirect indicators, such as code size, can.

Code size is a promising indicator for keeping the second-system effect in check.

6.3.3 Final Recommendations

The configuration framework case has shown that refactoring for maintainability is possible and can lead to positive results. Still, when refactoring, refactor as iteratively as possible, because even in this type of development late-found requirements can occur. If new software is to replace older software without directly expanding its functions, it is worthwhile and feasible to strive for software that is not bigger—in terms of code size—than the software it replaces.


7 Refactoring a DRM system

7.1 Problem Statement

7.1.1 Domain

There is a market need for a system that enables online media distribution. In particular such a system manages the digital rights so that illegal copying can be prevented. These systems are known as DRM (Digital Rights Management) systems, and usually consist of a client-server solution, where the owner of the contents manages the server, and the client is the consumer's PC.

The main objective of the project is to develop a robust client-server solution supporting DRM specifications, specifically the Open Mobile Alliance (OMA) DRM v2 standard, for online media distribution to the PC platform.

These DRM systems offer a rights model that can be used by the owner and the buyer of the content. Using a DRM system, the owner and buyer agree on how the purchased content may be used; the DRM system subsequently enforces this agreement. Possible rights on the content are, for example, the ability to burn content to CD-R, the ability to export it to a portable device (secure or non-secure), and time-limited usage.
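For illustration only, such a rights model could be represented roughly as follows; this is not the OMA DRM v2 rights expression language, merely the concept sketched in C++ with illustrative names:

#include <ctime>

struct Rights {
    bool mayBurnToCd                = false;
    bool mayExportToSecureDevice    = false;
    bool mayExportToNonSecureDevice = false;
    std::time_t validUntil          = 0;  // 0 means no time limit

    // The DRM client checks this before every use of the content.
    bool usageAllowed(std::time_t now) const {
        return validUntil == 0 || now <= validUntil;
    }
};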

7.1.2 Current Situation

The client software runs on a Windows environment (2000, XP), while the server software is Java (J2EE) based and can therefore run on Linux, Unix and Win32 systems. A working version had been delivered by the external party; however, there was no operational environment, i.e. the working version could not be rebuilt or updated from the received source code because specific build scripts were missing.

The received solution had some serious issues. The main issues were as follows.

Lacking documentation

Incomplete development environment

Insufficient robustness / responsiveness / quality

Lacking some essential functionality

Several components with incomplete functionality

Proprietary solutions were used instead of standardized solutions

Yet it was assessed that the received solution still offered the opportunity to have a shorter time to market than developing everything from scratch.

7.1.3 Goals

The goal of the project is to realize a robust, market-ready client-server solution for online media distribution to PC and mobile devices with a state-of-the-art DRM solution, by refactoring the received partial solution, enabling a shorter time-frame than developing such a solution from scratch.

In addition, a controlled development environment should be available to allow for maintaining and updating the software. A controlled environment here means automatically retrieving as much information as possible about the state of the software, and using it to steer development.

7.2 Solution

7.2.1 Approach

Several refactoring techniques were used to achieve this goal, such as:

Software metrics that indicate the quality of various parts of the software, with tool support: QAC and JTest for static code checking and lines-of-code measurement; CTC++ and Clover for measuring the code coverage of the tests.

Change-metrics to guarantee a certain quality level during refactoring. These change-metrics are derived from the variations over time of the software metrics mentioned above.

Code duplication detection tools (with a home-grown tool)

Automatic documentation extraction (javadoc, doxygen)

Test driven development on components that needed to be changed / redesigned

Automated tests during release procedures to prevent regression and to guarantee a certain quality level

Redesigning critical (sub) components using I-Mathic, a tool for translating requirements of a software (sub) system into a state-model that can be formally verified

A weekly automatic release procedure to generate a new software version for validation

The first three bullets are covered in Chapter 4 of [5], and bullets 5 and 6 are covered in Chapter 6.4 of the same document.

7.2.2 Expected Benefits

The approach described in the previous paragraph should result in a controlled development environment from where product releases can be done on a regular basis (e.g. twice a year) for a longer period of time (years). It should also allow for product diversification, i.e. release different product versions from the same base product, each product serving a specific market segment.

Furthermore the following is expected

Test driven development, automated testing, software and change metrics and code duplication detection will solve the original issue of an incomplete development environment and will give concrete information on the quality level of the product.

Automatic documentation extraction from the source code will make sure the documentation is easier to create and maintain and closer to the actual implementation.

Test driven development and the use of I-Mathic will allow extending the functionality of the product in a controlled way (i.e. without losing quality) and refactoring specific components.

Formally verifying critical components will reduce the chance of refactoring errors and increase quality.


Releasing new versions of the software on a regular basis, in combination with automated tests, will allow testing the product continuously to guarantee product quality and to verify the new and refactored functionality.

7.2.3 Major Results

In general the chosen approach did offer the expected results, however on specific points the results were somewhat different than expected.

Software metrics to measure the quality of various parts of the software. The static code checker tools did indeed produce a lot of useful information, but because of the amount of code received it was cumbersome and time-consuming to separate the important information from the less important. This problem was a consequence of receiving a large quantity of software at once, which directly causes a backlog of static code issues.

Change-metrics to guarantee a certain quality level during refactoring. Lines of code metrics offered some information: the risk is small if the amount of changes is small. But this information is limited because code was added and removed at the same time, which made it difficult to extract useful information from it. The variations of other metrics measured by the static code analysis tools indicate progress of certain aspects of the refactoring work.

Code duplication detection tools. This tool produced, especially in the beginning, a lot of information on software parts that needed refactoring because there was a lot of code duplication. Once the refactoring process was ongoing, the tool was less useful, because we did not duplicate code.

Automatic documentation extraction (javadoc, doxygen). In our experience it is best to have both high-level documentation in the form of Word documents with UML diagrams, and low-level documentation that specifies all the details of the interfaces. If low-level documentation is desired, then automatic documentation extraction is the best way to produce it, even if it requires the discipline of keeping it up to date.

Test driven development on components that needed to be changed / redesigned. This worked fine in some cases and not so well in others, mainly depending on the engineers. It requires new development competencies that must be trained, and a change of attitude that takes time.

Automated tests during release procedure to prevent regression and to guarantee a certain quality level. This worked very well as expected. All automated tests had to pass before a version was released to the customer.

Redesigning critical (sub) components using I-Mathic, a tool for translating requirements of a software (sub) system into a state-model that can be formally verified. This approach did work. Details can be found in the recommendation section.

A weekly automatic release procedure to generate a new software version for validation. This worked very well as expected and allowed the customer to closely follow the status of the software because each released version was a working version albeit with some incomplete functionality and some known issues that still needed to be solved.


7.2.4 Success Indicators

The main goals of the project have been reached. A controlled development environment was established and software versions were released on a weekly basis. Metrics and online documentation were automatically created for all versions.

However due to changing business directions and priorities, the software was not commercially deployed. Therefore formal product releases to the market, product diversification and product upgrades, although prepared, were not realized.

7.3 Conclusion

7.3.1 Summary

Most of the benefits were realized, but not all were as substantial as expected. The change in business directions and priorities prevented full validation of the controlled development environment.

7.3.2 Lessons Learned

Using an approach where test-driven development and automated testing are applied, and where software and change metrics are maintained, solves the original issue of an incomplete development environment and gives concrete information on the quality level of the product. A concrete status overview of the product, including all test results and all metrics, was available at all times.

Automatic documentation extraction from the source code will make sure the documentation is easier to create and maintain and more consistent with the actual implementation. Extracted documentation was available at all times and provided low-level documentation for users of the code. Still, the problem remained of code being changed without changing the documentation lines in the same files.

Test driven development and the use of a formal tool will allow extending the functionality of the product in a controlled way (i.e. without losing quality) and refactoring specific components. Some development teams were very good at refactoring with this approach. Other development teams had difficulties adopting it, especially with defining the correct tests before refactoring.

Formally verifying critical components will reduce the risk of refactoring errors and increase quality. Those components that were formally verified did offer increased quality. However, these components were combined with other components that were not formally verified, so it could still happen that the product did not work as expected because of incorrect component interface assumptions.

Releasing new versions of the software on a regular basis, in combination with automated tests, allows continuous testing of the product to guarantee quality and to verify the new and refactored functionality. This resulted in a lot of useful feedback on the status of the software and the required changes, allowing prioritization of the development work.


7.3.3 Final Recommendations

7.3.3.1 Development environment

A controlled development environment that gives feedback on the status of the product is of course useful for any software development. However, in the case of refactoring, where the developers are unfamiliar with the code and documentation is limited, it is essential.

7.3.3.2 Development techniques

Test driven development for refactoring puts an automatic focus on the interfaces of a component. This is good, but requires a new/different development attitude that takes time to adopt. On some occasions this time might not be available.

7.3.3.3 Development tools

Formal tools for translating requirements of a software (sub)system into a state-model that can be formally verified are very useful for refactoring, but put some important conditions on the development process before they really become efficient. Specifically:

More time is spent on interface definitions

o Definitions must be very clear, verified and preferably validated

o Definitions only contain what is really needed

Assumptions become explicit

o Tool stimulates a more formal way of defining the system

o Derived requirements can be defined

More validation during the design process

o Easy to rerun the tool once more after a change

Testing focuses on incomplete requirements (design errors) and less on programming errors

Testing of components around component under re-design and the component itself is easier

o Much more clear what needs to be tested

o Separation of specific responsibilities between components is stimulated

The experience is that development effort remains the same while quality improves (though this is hard to prove with hard numbers).

Interface definitions become even more important

o The I-Mathic tool loses its value if interfaces are wrong

o More focus needed on validating system boundaries by using the tool (even before generating code with the tool)


8 Migrating MIP from Java to Microsoft.NET

8.1 Problem Statement

8.1.1 Domain

Philips Healthcare is a sector within Philips that delivers a wide range of health care solutions to the market. Within Healthcare, the traditional core is formed by product groups making imaging systems. These hospital systems produce images of the inside of the human body by various technologies, such as X-Ray, MRI, CT and Ultrasound.

MIP is a Medical Imaging Platform for all imaging systems within Philips Healthcare. It provides generic connectivity, printing, database, servicing and viewing functionality that can be used for all these systems. MIP delivers its platform to the product groups in a half-yearly heartbeat.

This case study concerns refactoring this MIP platform from Sun's Java platform to Microsoft .NET. This entails translating code from the Java programming language to C#, replacing Java libraries by .NET libraries, and adapting internal interfaces where necessary.

8.1.2 Current Situation

A few years ago, the MIP code base was entirely written in Java, and interfaced with the client code from product groups through COM technology. (In order for the product groups to use the MIP platform via COM interfaces, MIP contained wrappers converting Java-style interfaces to COM.)

The MIP software stack comprises two layers, namely a Base layer and a Top layer, as illustrated in the following diagram using the UML lollipop notation.

Figure 1. MIP software stack comprising the two layers Base and Top.

The Base layer contains generic facilities and utilities. The Top layer actually contains several segments for different functional areas, like connectivity, printing, servicing and so on. The Top layer is dependent on the Base layer, but not the other way around. The segments within the Top layer are not dependent on other segments, but only on the Base layer. In its turn the Base layer is dependent on the Java libraries.

The usage of Java libraries and types appears everywhere in the software stack, including in the interfaces exposed by the Base layer. For example, the type JavaList (the actual name of the type is different, but we use the name JavaList here for clarity and brevity) from the Java libraries occurs in many interfaces the Base layer provides. We summarize this by saying that the Base layer offers a Java-style set of interfaces. Obviously, the Top layer uses these interfaces.

At that point it was decided to migrate to Microsoft .NET technology, both for implementation of the MIP code base and the interface with the product groups, replacing COM. There were two important reasons for this decision:

1. As a consequence of the battle between Microsoft and Sun, Microsoft stopped supporting the Microsoft Java Virtual Machine, which was essential for MIP's Java code to offer COM interfaces to the product groups.

2. In order to improve code reuse and UI harmonization, MIP had to be extended with a GUI framework. It was preferred to build this GUI framework on top of .NET rather than Java, because it would be easier for the product groups' C++ code to use a .NET based GUI framework.

The choice for .NET still left several choices for the programming language, like Visual Basic, J#, C++ and C#. Microsoft clearly positioned C# as its preferred language, with best IDE support and most new language features on the horizon. That is one reason why we also chose to adopt C# as the new implementation language for the MIP platform.

So now the problem is how to migrate MIP from Java to the .NET technology and C#. Microsoft's good news is that the .NET framework offers a certain form of compatibility with Java. To be precise, .NET includes a variant of the Java language, called J#, and .NET offers J# libraries which mimic the Java libraries. So existing Java code can run without too much effort on the .NET framework, basically by taking the Java code and compiling it as J#.

The bad news, however, is that these J# libraries are not an integral part of the .NET framework. Instead, they offer functionality duplicate to what is in the .NET libraries proper. For example, .NET‘s J# libraries have a JavaList class and .NET‘s recommended libraries have a DotNetList class. The two list classes are conceptually the same but have a slightly different interface. In the code, these types are not interchangeable (but could be converted to each other at some performance cost).

Given this, we considered whether we could keep on using the J# libraries for existing code, next to using the recommended libraries for new functionality. But as indicated above, the J# types (like JavaList) do appear in the interfaces exposed by the old code, making it difficult for new code to use them. So we decided to set as a goal to fully get rid of the J# libraries and our Java-style interfaces. Similarly, it was an option to retain existing code in J# rather than converting it to C#. But having two languages in one platform was considered too expensive to maintain, e.g. considering the cost of education for software engineers. Furthermore, it would complicate leveraging the (future) features of C#, since C# and J# cannot be freely mixed. So we chose to convert all the code to C#.

Summarizing, the migration of MIP from Java to .NET entails

translating all Java code to C#,

replacing all usage of the Java libraries by usage of the proper .NET libraries,

replacing all our Java-style interfaces by new .NET-style interfaces (improving them where possible).

From this point it was clear this migration would be a large scale refactoring effort, covering about 1 million lines of code, several years of lead-time, and many man-years of effort.


8.1.3 Goals

The goal of this case study is to do the refactoring of the Medical Imaging Platform from Java to .NET with least disturbance to the normal development of this platform. In particular:

1. Normal development of new functionality should continue while refactoring.

2. The normal release heartbeat with its major releases should continue while refactoring, so all existing functionality should remain working.

3. Refactoring effort (cost) should be kept down.

4. Impact on the overall efficiency of normal development should be minimized.

8.2 Solution

8.2.1 Alternative approach

The most obvious, brute force, approach is the following:

1. Either stop all normal development (new functionality, bug fixing) on MIP, or let this development proceed on another software archive.

2. Refactor all MIP code from Java to C# and proper .NET libraries. This would take a considerable amount of time (a year or so), in which the MIP code would not even compile.

3. Make MIP compilable again.

4. Test the new MIP software stack, and solve the problems (which have inevitably been introduced by refactoring).

5. If normal development was not stopped in step 1, this developed code needs to be merged with the migrated software stack. Typically, the changes done in normal development also need to be refactored.

This is a 'big bang' approach.

The drawbacks of this big bang approach are:

A long period of time with a software stack that is not even compilable, let alone runnable.

Harder problem solving. The testing in step 4 may reveal problems which have been introduced a long time ago (say at the start of the refactoring), which makes these problems harder to solve.

All-or-nothing. If some product requires some part of the new technology in some part of the software stack, this product can only be made when the whole refactoring has been completed.

"Running behind the facts": in case normal development proceeds, it will be done on the basis of the old technology. This is grueling, since it is already known that it will need to be refactored to the new technology. So the total effort increases, because the normal development cannot be done right away with the new technology.

All in all, it is clear that the big bang approach does not satisfy the goals at all.


8.2.2 Approach and expected benefits

Instead of following a big bang approach, we did a gradual migration from Java to .NET. At intermediate points, major releases could be done, even though some parts of the code were 'in migration', still using J#, J# libraries or Java-style interfaces.

The following diagram shows this gradual migration, with five different points in time, and four refactoring steps. Each of the stacks has the Java and/or .NET technology, the Base and the Top layer, with the style of interfaces they support.

Figure 2. Four refactoring steps of the gradual migration.

Explanation of the stacks:

Stack 1. This is the starting point. For example, Base components work with JavaLists.

Stack 2. The Java platform has been replaced by the .NET platform, using (only) the .NET J# libraries and the J# language. So the code still uses JavaLists. In theory, this should require no effort.

Stack 3. Let the Base layer use the proper .NET libraries and provide .NET-style interfaces next to the Java-style interfaces. So at the end of this step, all Base components provide two styles of interfaces: both the Java style and the .NET style. Often, wrappers play a role in effectively realizing this (a minimal wrapper sketch follows this list). But still, the step to realize this stack involves considerable effort.

Stack 4. The Top layer uses only the .NET-style interfaces. So the Top layer has been refactored to use the .NET-style interfaces. In practice, the Top layer consisted of several segments, which could each be refactored independently. So it also costs considerable effort to realize this, but the work can span several releases.

Stack 5. The Base layer does not offer the Java-style interface anymore, and the dependency on the Java libraries is removed; the transition to the new technology is complete. The step to get here may cost considerable effort, but can also span several releases.
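The following is a minimal adapter sketch of the wrapper idea used in Stack 3, written in C++ for consistency with the other sketches in this document (the actual MIP wrappers adapted J# types to the proper .NET collection types in C#); all type names are illustrative:

#include <cstddef>

// Old, Java-style list interface (as offered via the J# libraries).
class JavaStyleList {
public:
    virtual ~JavaStyleList() = default;
    virtual int elementAt(std::size_t index) const = 0;
    virtual std::size_t size() const = 0;
};

// New, .NET-style list interface.
class DotNetStyleList {
public:
    virtual ~DotNetStyleList() = default;
    virtual int Item(std::size_t index) const = 0;
    virtual std::size_t Count() const = 0;
};

// Throw-away wrapper: presents the new style on top of an old-style list,
// so already-refactored Top-layer code can use a not-yet-refactored Base.
class JavaToDotNetListWrapper : public DotNetStyleList {
public:
    explicit JavaToDotNetListWrapper(const JavaStyleList& inner) : inner_(inner) {}
    int Item(std::size_t index) const override { return inner_.elementAt(index); }
    std::size_t Count() const override { return inner_.size(); }
private:
    const JavaStyleList& inner_;
};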

The expected benefit is that we can achieve the goals mentioned in the previous section. In particular, this solution has the following advantages over the brute force solution:

The software stack stays compilable all the time.

The software stack remains functional all the time. This makes it possible to release products at any point in time during the refactoring process (although there may be performance considerations). Furthermore, it leads to easier problem solving: the average time between changing code and executing the changed code in a test (and potentially revealing a problem) is much smaller here than with the big bang approach.


Earlier leveraging of new technology. From stack 3 onwards, components in the top layer can use the new technology.

Right-in-one-time. Normal development can write its code using the new technology quite soon; Base development from stack 2 onwards, and Top development from stack 3 onwards, without having to be refactored afterwards.

There is also a disadvantage:

Extra effort for components to support two types of interfaces, e.g. effort for creating wrappers to translate JavaLists to DotNetLists (which are thrown away after the migration completes).

8.2.3 Major Results and Success Indicators

The overall goal has been achieved:

MIP is now 100% C# and does not use the J# libraries anymore (nor the Java-style interfaces).

The success indicators mentioned above show the following:

1. Four major releases were done while this refactoring was ongoing (the first roughly corresponded to stack 3, and the other three to mixtures between stacks 3 and 4).

2. One heartbeat (out of five) was skipped, because the initial step of switching to the .NET platform took one year, even while using the .NET Java-compatibility libraries. The effort to come to stacks 2 and 3 was underestimated.

3. The refactoring took 20 man-years of effort (in 2½ years throughput time). A rough estimate of the refactoring rate is about 30 lines of code per hour.

4. Functional development during refactoring continued with little disturbance.

Some other observations:

It was harder to understand code during refactoring, because of temporary code duplication and wrappers between Java and C#.

There were no major problems introduced while refactoring (but many small ones). This can be explained by the fact that only implementation problems were introduced, and (almost) no design problems, which are much harder to solve.

Above we stated that the initial step of switching to .NET was underestimated. One of the reasons is that the transformation from Java to J# was not as smooth as expected. Microsoft promises full compatibility, but in practice this is not so.

8.3 Conclusion

8.3.1 Summary

The large MIP software stack has been successfully migrated from Java to .NET in a gradual manner, using .NET's support for Java during intermediate steps. Most goals have been fully met. The only deviation was the late release of MIP after the first migration step, because the effort to realize this was underestimated.

So the approach delivered what we expected. There were no deviations from the approach worth mentioning.


8.3.2 Lessons Learned

We learned the following lessons:

Actually, the approach described above is something we learned during the course of the case study; during the migration the approach was somewhat implicit and subconscious. Only later, when explaining the migration to a wider audience in the SERIOUS workshops, did the approach become more explicit (and clearer to us too!).

The approach described above works well.

The initial steps of adopting a new technology often take more effort than one expects.

8.3.3 Final Recommendations

For a large scale software migration, try to follow this approach. It has been described as a refactoring pattern called "Gradual Migration of a Software Stack" [6].


9 Migrating to a Graphical User Interface

9.1 Problem Statement

9.1.1 Domain

This case study covers the area of MRI service applications. The service applications suite is used throughout the whole lifetime of the system, covering the production, installation, preventive- and corrective-maintenance phases. The System Test and Tuning procedures (STT), part of the service applications suite, are the specific subject of this case study. The technical domain is in the area of refactoring the business logic such that modern state-of-the-art user interface techniques can be applied at any time in the future (fit4future).

9.1.2 Current Situation

The STT application is used by a large variety of users. Some users have basic system knowledge; others are expert users. Most users are Philips employees; others are third-party service organizations. Most of the Field Service Engineers (FSE) maintain MRI products only; others service other products (X-ray, CT, NM) as well.

Another aspect that needs to be addressed by the STT application is the variety in the various types of hardware components. This variety exists for two reasons:

The features of a system (e.g. low-end versus high-end).

Possible upgrade paths of a system, still containing the initially delivered hardware components, but running a more recent software release.

The impact on the user interface of the STT application should be kept to a minimum wherever feasible and realistic.

The STT application is built on top of Windows technology, mainly MFC for the MRI VT emulator. The business logic of the STT application is written in C and C++. The majority of the STT application is built using a legacy library for console-based applications adhering to the de-facto VT220 terminal standard: 24 lines of 80 characters. The effort spent to build this application throughout the years was huge, and refactoring the STT code base can be a time-consuming activity. The business logic runs partly on Microsoft XP and partly on the VxWorks real-time operating system.

In the picture below you will find the main menu of the STT application. The inner box is the VT220 emulation; the buttons around it map to the function keys of a (virtual) VT220 keyboard. More buttons are shown depending on the context of the STT application.


Figure 3 MRI VT emulator

The legacy MRI VT emulator provides the service engineer with a graphical user interface with buttons and VT220-based graphs. Many very MRI-specific and advanced features are built into this MRI VT emulator. For example, clicking on a number in a menu tree submits the number, and double-clicking on a number automatically selects it, just like a 'real' graphical user interface does.

Apart from the good things, the limitations of the VT-based user interface were also felt:

1. The user interface and workflow are not harmonized across different product lines of Philips Healthcare.

2. Displaying of graphical data, drawings and images, which could help the service engineer to do his job quicker and better, is not possible.

3. Limited area for text and help information.

4. Remote execution of the STT text-based user interface is complicated in the Philips Remote Service Network, due to the fact that the application is not fully compatible with the telnet protocol.

From a design point of view two other concerns came up:

1. No clear separation between business logic and user interface design.

2. Different (more than one) concepts of how to build a System Test and Tuning procedure existed in the software code base.

Below you will find a drawing (Figure 4) containing the main components of the STT application.


Figure 4 Deployment View STT application

9.1.3 Goals and Expected benefits

The goal is to replace the text-based user interface of the STT application by a graphical user interface. This should enable the field service engineer to execute STT procedures with better guidance (more text, more graphical data) on how to perform the procedures.

Within Philips Healthcare an integrated field service framework was developed in the late '90s. It has been adopted by the Philips global service organization as the standard framework for building service applications for Philips medical devices. This framework defines a common user interface technology, including basic workflow concepts.

The ultimate goal is to have a fully integrated STT application in the field service framework. For that a clear separation of business logic and user interface is required. The separation of business logic and user interface logic also enables two independent teams to work on their own project requirements.


9.2 Solution

9.2.1 Approach

The goal of replacing the character based user interface by a graphical user interface was achieved by the following approach:

Follow an evolutionary development approach in which each project only addresses limited functionality, in order to spread the required resources and reduce project risk.

Having two development teams, one focusing on business logic and simple text based user interface and the other on the advanced graphical user interface. The team focusing on the business logic resides in Best, The Netherlands. The team focusing on the advanced graphical user interface resides in Bangalore, India.

Follow a dedicated design pattern that enables a strict separation of user interface and business logic.

9.2.1.1 Evolutionary Development

The effort required to refactor the STT business logic was considered to be very high, while initially no additional features would be added. This was the background for choosing an evolutionary approach by which for every product release (about once per year) only a small part was selected to be refactored. The selection was discussed with the stakeholders and project managers and was always a compromise between required functionality and available resources: typical time-bound project execution. The following projects were defined:

1. The first project designed the mrtest design pattern. The main focus was on designing the business logic for a new subsystem with a VT220 user interface. From a user interface point of view the advantage for the end users was minimal. This project ran in 2000-2001 and was executed by the Dutch development team.

2. The second project used the mrtest design pattern for another new subsystem using the Field Service Framework for the graphical user interface. The project ran in 2001-2003, mainly in India.

3. The theme of the third project was generalization. The mrtest design pattern was further developed, and the VT220 and Field Service Framework applications were generalized in such a manner that they could handle (almost) all future tests. The project ran in 2003-2005, executed by both engineering teams.

4. The fourth project focused on porting all the remaining business logic to the mrtest model, except for the very specialized test procedures. For the latter a 'short cut' was taken: the VT220-based user interface is started directly from the graphical user interface. So from the end-user perspective the field service framework user interface was the entry point to the STT suite. The project ran in 2005-2007, mainly executed in India, while the Dutch engineering team was adding more business logic without any concern about the user interface technology to be used. From this project onwards both teams could work independently of each other to a great extent.

5. In the fifth project the Field Service Framework application was ported from a web-based technology to .NET (.NET Remoting and WinForms) technology.


This new generation of the Field Service Framework provided additional features to refactor also the very specialized STT test procedures. The project will run in 2007-2008, mainly in India.

6. The sixth project should remove the last VT220-based STT test procedure. This is the moment the VT220 legacy code can be removed from the code archive. The project is planned to run in 2008-2009. It needs to run in close cooperation between the Indian and Dutch engineering teams, because the remaining test cases are very specific and require special attention, especially from a requirements point of view. The separation of business logic and user interface is crucial for this part of the application. New features of the FSF.NET generation of the Field Service Framework application need to be incorporated in this product. Once that has been implemented and the requirements have been defined, the teams can work at their own pace to a great extent.

9.2.1.2 Team organisation

The mrtest design pattern enables organizing teams according to the specific areas. Two teams were assigned to further develop the STT application. The business logic is developed in the Netherlands; from past decades much domain knowledge was available in the STT application area, which was a good starting point for refactoring and further deploying the business logic. The graphical user interface application (Field Service Framework) was already developed in India; for this reason it was an explicit decision to also build the graphical user interface for the STT application in India. The following figures give an impression of the team sizes, although throughout the years the team sizes did vary:

The Netherlands:

1 project manager.

3 designers.

8 developers.

India:

1 project manager.

1 designer

4 developers

One designer from the Netherlands crew is assigned to organize and maintain the communication between the teams. Specific tasks of this designer are:

Project start-up and definition: discussion on what is required and feasible given time-bound projects and the available resources. The high-level requirements and overall design decisions are taken during this phase. Especially during this phase, face-to-face meetings were held to get a better and clearer understanding of issues.

Knowledge transfer and training of the Bangalore engineering team on requirements and design level.

Knowledge transfer and training of the Best engineering team, especially on how to deploy and use the new graphical user interfaces.

Hold documentation and code review sessions.


Organize and perform acceptance testing of Bangalore deliveries.

Keep in close contact by having regular meetings (teleconferences) with both teams, to catch miscommunication and frustration as early as possible.

Face-to-face meetings were organized to improve the communication between the teams, about once or twice per year, depending on the needs of the project.

9.2.1.3 MRtest design pattern

The execution of a system test is modeled as a flow. The drawing below shows the basic elements, called nodes.

Figure 5 Basic STT flow

For every test a specific flow of basic elements is designed in order to implement the requirements.

Figure 6 Example of a complex STT flow

The drawing below (Figure 7) shows a conceptual class diagram of the mrtest design pattern. The pattern has two base classes: one containing the business logic of a test (mrtest), the other sending data to the user interface (mrprogress). The application has to construct an mrprogress class, which has a generic interface but a specialized implementation for every type of user interface technology.

Another aspect of the mrtest design pattern is that only classes whose implementation fits the system configuration can be created by the mrtest class factory (factory design pattern). By doing so, the hardware and software configuration dependencies are handled: classes that can be validly constructed can run on the system. This information is used by the application to build a list of 'capabilities' of the system.


Figure 7 Simplified class diagram mrtest design pattern

initialize() is called once at the start of a flow; exit() is called once at the end of the flow. GetParameters() is called to retrieve the set of parameters that must be shown on the user interface during the Parameter Editor node. Depending on the state of the test, the set of parameters can vary. ExecuteStep(StepNumber) is called in the Execute node. During the execution of ExecuteStep(), dynamic user interface updates, basically attribute/value pairs, are sent to the user interface.
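A condensed C++ sketch of the pattern as described above, with simplified signatures; the helper names (Parameter, update, createTestIfSupported) are illustrative, not the actual STT code:

#include <memory>
#include <string>
#include <vector>

struct Parameter { std::string name; std::string value; };

// UI-technology abstraction: one specialization per UI (VT220, FSF, ...).
class mrprogress {
public:
    virtual ~mrprogress() = default;
    // Receives the attribute/value pairs sent during ExecuteStep().
    virtual void update(const std::string& attribute,
                        const std::string& value) = 0;
};

// Base class holding the business logic of one test.
class mrtest {
public:
    explicit mrtest(mrprogress& ui) : ui_(ui) {}
    virtual ~mrtest() = default;

    virtual void initialize() = 0;                      // once, at flow start
    virtual std::vector<Parameter> GetParameters() = 0; // Parameter Editor node
    virtual void ExecuteStep(int stepNumber) = 0;       // Execute node
    virtual void exit() = 0;                            // once, at flow end

protected:
    mrprogress& ui_; // used to push dynamic UI updates during execution
};

// Factory (declaration only in this sketch): returns nullptr when the
// hard-/software configuration does not support the test; the tests that
// can be constructed form the system's list of 'capabilities'.
std::unique_ptr<mrtest> createTestIfSupported(const std::string& testName,
                                              mrprogress& ui);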

The user interface application is responsible for executing the defined flow and for displaying the instruction, parameter editor, execution phase and test results. Flow definition files describe the flow for all types of user interface technologies. Special measures have been taken to provide text only for a VT220 user interface, and more advanced text, including pictures, for the graphical user interface technology.

9.2.2 Major Results

Projects A up to D are finished and their products are commercially available. A huge part (> 90%) of the STT functionality is available through the field service application. In the remaining cases the MRI VT emulator is started by the framework instead of a native field service framework implementation; this approach is only applicable for applications running on the MRI console. A large part of the STT business logic has been refactored to the mrtest design pattern.

An example of the new user interface is depicted in Figure 8 (Instruction node), Figure 9 (Parameter editor) and Figure 10 (Test results node). A test procedure has been defined to adjust the volume of the speaker system; below you will find some screen shots.


Figure 8 Instruction node

Figure 9 Parameter editor node


Figure 10 Test results node

As a very good side effect of the effort spent, an STT batch controller has been developed. The STT batch controller can execute any STT test procedure from a scripting environment (COM technology). This has been achieved by re-using the concepts and implementations of the refactored code.

Project E, the move to a new user interface technology (FSF.NET), is in progress and will bring a user interface more harmonized with other Philips Healthcare product lines, as well as an even more intuitive and easier-to-use user interface. The business logic can be used as-is today, which proves that the business logic and user interface technology are fully decoupled.

9.2.3 Success Indicators

The goals have been achieved:

Applied new user interface technology (> 90% of STT test procedures).

Independent development of business logic and user interface application.

The success factors have been:

Clear design approach

A well established communication structure between the two engineering teams, both formal and informal.

9.3 Conclusion

9.3.1 Summary

The goal of the project has been partly achieved, because some implementation is still in progress. Still, the observations so far point in a good direction. Especially the possibility to show graphical elements, like images and drawings, during the instruction nodes of the flow is appreciated very much by the end users.

Developers of the STT business logic can do their job without worrying about the user interface technology. This enables fast development and easy extension of the STT business logic.


9.3.2 Lessons Learned

From the design point of view we have learned that the mrtest design pattern was easy to understand and has proven to be very useful to decouple the business logic from the user interface technology.

Looking back, several projects and an equal number of commercial products existed. End users (factory and service engineers) have seen variations in the products. This consequence is not always appreciated and was reported as annoying by some end users, especially service engineers who have to maintain several product lines concurrently for the coming five years. It can be compared to a situation where you have to cope with Windows 3.1, Windows 95 and Vista day by day.

Project-wise, the decision to execute time-bound projects worked very well, was easy to manage, and had very explicit milestones, which resulted in good quality and well-performing functionality.

9.3.3 Final Recommendations

The recommendation for future projects is still to work with time-bound projects and clear designs. An important aspect is to select a part that is realistic to implement given the resources and lead time of the project. It is also crucial to have a software designer/architect who is the linking pin between the teams.


10 Refactoring from HW to distributed SW platform

10.1 Introduction

The main topic of the ITEA SERIOUS project is software evolution and refactoring. The SERIOUS project is organized in such a way that software engineering methods and tools for refactoring are developed at research centers and universities, and that experiences (practical cases) are brought in by the industrial partners, when possible using these newly developed methods. This deliverable describes such a large-scale industrial case study.

One of the case studies carried out at Philips Healthcare concerns the imaging subsystem of X-ray devices (other Philips case studies done in the context of the SERIOUS project can be found in [8]). Our product portfolio of X-ray scanners can be considered a product family. The imaging platform of today, which serves multiple members of the family, in particular cardio-vascular systems, is equipped with ASICs (application-specific ICs, for which a special mask has to be made and which lack the flexibility of a CPU-like IC), optimized for speed and performance and dedicated to the so-called classical image enhancement algorithms (temporal noise reduction, image subtraction and adaptive edging).

Recently, more and more advanced image processing (IP) algorithms, e.g. capable of extremely powerful noise reduction, have entered the arena. The (somewhat outdated) ASIC architecture is not able to incorporate these new algorithms, which have a high value from a marketing point of view. Moreover, the architecture is too costly, compared for instance with today's multi-core PC technology.

For obvious reasons we want to migrate to a more future-proof, i.e. more open, scalable, and flexible architecture. Therefore, it was decided two years ago, after a thorough feasibility study [7], to replace today's hardware-based imaging platform by a software-based solution. The BRICS (Building a Real-time Imaging Component Suite) case study, basically a major platform migration effort, should realize this new software-based platform for image processing.

10.2 Problem Statement

10.2.1 Domain

In this section the domain will be described in terms of the major external trends and one internal business re-engineering aspect that is taking place right now.

10.2.1.1 External trend 1: Medical procedures become less invasive.

The on-board imaging of cardio-vascular X-ray is mainly characterized by hard real-time requirements (see 10.2.2 for more precise numbers). These requirements mainly come from interventional usage of the X-ray scanners, which is quite different from just taking pictures of the human body to make a diagnosis. A typical example of interventional usage is catheterization, a medical procedure in which vessels such as coronaries that are usually severely occluded with plaque are opened up again with a stent (officially this procedure is called PTCA, percutaneous transluminal coronary angioplasty: enlarging a narrowing in a coronary artery with a balloon-tipped catheter). During this procedure clinical users navigate with guide wires in the human body with the aid of X-ray images displayed on several monitors. Placing and opening up the stent is also evaluated with live X-ray images. In short, in this kind of application doctors rely completely on medical images to do their job.

10.2.1.2 External trend 2: Lower the X-ray dose.

Needless to say, dose is harmful to both patients and staff. There has always been a clear-cut trade-off between image quality (IQ) and dose. By lowering the dose, IQ is sacrificed, mainly because noise is added. With classic 'Photoshop-like' IP, the noise level can be reduced, but this goes at the expense of two other important IQ drivers: sharpness and contrast.

However, with the more advanced (and computationally intensive) IP that is becoming available, noise can be reduced significantly while hardly affecting sharpness and contrast. In other words, with state-of-the-art IP it is not dose versus IQ anymore, but low dose and IQ (to a certain extent, of course).

10.2.1.3 External trend 3: More pixels (of course).

Ever more pixels (flexible image formats) are required to enhance quality and ease-of-use for clinical users even further, just as in photography.

10.2.1.4 Business re-engineering: A common reference architecture.

The business re-engineering program covers many functional areas: marketing and sales, customer service, logistics, etc. The most important item for the development department is the "mandate" that all X-ray products should adhere to a common reference architecture (see e.g. [9] [10]). The IP subsystem is one major building block within this common reference architecture.

10.2.2 Current Situation

The current imaging subsystem supports multiple video streams (typically 2), each up to 30 frames per second (fps) in a 1K² (1024×1024) pixel format, with a pipeline latency of less than 150 ms (needed for eye-hand coordination when inserting catheters). The architecture is based on ASICs.

The major problem is that this (somewhat outdated) ASIC architecture is not able to incorporate the new IP algorithms, which have a high value from a marketing point of view. A new platform is needed that enables fast introduction of the latest IP functionality. Furthermore, there is a strong drive towards cost reduction, so an important constraint is that this new platform should be based on standard hardware components, such as multi-core PCs.

10.2.3 Goals and Expected benefits

The challenge of the BRICS case study is to realize a new IP subsystem as a platform (it should serve multiple X-ray products) that is more open, scalable, and flexible, with advanced state-of-the-art IP algorithms on board, fulfilling these tough performance requirements.

The goal is to build a software-only component suite, consisting of a framework, tools and utilities and an ever growing collection of IP modules that can be plugged-in.



1. The framework provides the "middleware" for implementing 2D image processing algorithms on PCs. When IP tasks have to be done in parallel (to achieve the required throughput with low latency), the framework takes care of dividing images into strips, which are then processed in parallel on a potentially distributed PC system (a sketch of this strip-splitting idea follows this list).

2. Each IP module implements (a part of) an image processing algorithm. Within the aforementioned framework, these IP modules are strung together to form a so-called graph.

3. The set of tools is used to aid in developing and testing the BRICS component suite and the applications built on top of it. Utilities provide common functionality like logging.
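To make the "divide and rule" idea of item 1 concrete, the following fragment is a minimal, hypothetical sketch (in Java, not the actual optimized C/SSE code of BRICS) of splitting a frame into horizontal strips and processing them on a thread pool; all class and method names are inventions for this illustration:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StripParallelDemo {

    // Toy "IP module": invert rows [rowFrom, rowTo) of a 12-bit image in place.
    static void processStrip(short[][] image, int rowFrom, int rowTo) {
        for (int r = rowFrom; r < rowTo; r++)
            for (int c = 0; c < image[r].length; c++)
                image[r][c] = (short) (4095 - image[r][c]);
    }

    public static void main(String[] args) throws Exception {
        final int rows = 1024, cols = 1024, strips = 8;
        final short[][] image = new short[rows][cols];

        ExecutorService pool = Executors.newFixedThreadPool(strips);
        List<Future<?>> pending = new ArrayList<>();
        int stripHeight = rows / strips;
        for (int s = 0; s < strips; s++) {
            final int from = s * stripHeight;
            final int to = (s == strips - 1) ? rows : from + stripHeight;
            pending.add(pool.submit(() -> processStrip(image, from, to)));
        }
        for (Future<?> f : pending) f.get(); // join: wait until every strip is done
        pool.shutdown();
    }
}

Note that a real framework must also hand each worker some overlap rows for neighborhood operations (e.g. convolutions) and must guard the per-frame latency budget; the sketch ignores both.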

The most important benefits of the software-based IP platform are:

1. Almost by definition, having a platform implies ease of reusing components, in our case IP modules.

2. Because the platform is open, it significantly shortens the time to transfer a new algorithm, moulded into an IP module, from research to the product.

3. IP modules can be made suitable for any image size (in principle), whereas the old ASIC architecture could only process 512², 1024² and 2048² images.

4. Scalability. When more performance is required (an algorithm uses more CPU cycles), it can be achieved by plugging in more hardware (to a certain extent, of course).

5. There is a clear separation of algorithm design (i.e. creating an IP module) and algorithm usage in an application (assembling the graph in which the IP modules are run). This is done via the framework layer, which also takes care of all parameter handling (a minimal sketch of this graph-assembly idea is given after this list).

6. The end-user does not directly benefit. The first releases contain no new functionality in terms of new IP algorithms. However, once the refactoring or re-architecting effort has been done, new IP features can be implemented more quickly.
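As an illustration of point 5, here is a minimal, hypothetical sketch of assembling IP modules into a processing graph via a framework layer; the interface and all names are invented for this sketch and do not reflect the BRICS API:

import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class GraphAssemblyDemo {
    // For this sketch an IP module is simply an image-to-image operator.
    interface IpModule extends UnaryOperator<short[][]> {}

    // The "framework" side: run a (linear) graph of modules on one frame.
    static short[][] runGraph(List<IpModule> graph, short[][] frame) {
        for (IpModule m : graph) frame = m.apply(frame);
        return frame;
    }

    public static void main(String[] args) {
        // The "application" side only assembles the graph; it never touches
        // the internals of an IP module.
        List<IpModule> graph = new ArrayList<>();
        graph.add(f -> { // toy module 1: invert 12-bit pixels
            for (short[] row : f)
                for (int c = 0; c < row.length; c++) row[c] = (short) (4095 - row[c]);
            return f;
        });
        graph.add(f -> f); // toy module 2: identity, standing in for e.g. noise reduction
        short[][] out = runGraph(graph, new short[1024][1024]);
        System.out.println("processed a frame of " + out.length + " lines");
    }
}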

10.3 Solution

10.3.1 Approach

The BRICS case study started in January 2006. Gradually the project scaled up; it has now grown into a medium-sized software project with about 15 FTEs. The code base is approximately 300 KLOC (kilo lines of code). BRICS is currently working on its fifth increment.

Besides the professional tooling known from modern large-scale software projects (ClearCase for configuration management, coupled to ClearQuest for PR handling, TICS for code checking, etc.), we added the following very specific process ingredients to the BRICS project.

10.3.1.1 Incremental development with a half-year heartbeat.

The following diagram depicts how maintenance of previous releases and development of new releases is done in BRICS. There will be two BRICS releases per year. As you can see in the diagram, releases and life-cycle management


are purely time-driven. This means that the content is chosen in such a way that it matches the release schedule.


Figure 11. Way of working in BRICS

The support for releases prior to the last released product will be kept to a minimum; customers will be asked to follow the component suite of the latest release.

There is a need for activities that go beyond development of the next release: investigations, prototyping, etc. are needed to find out what the exact specifications of the next release must be, especially for high-risk items. This is an important activity in BRICS too.

Note that during a project increment (yellow box in Figure 11) all three activities (prototyping, development, and maintenance) take place.

10.3.1.2 Refactoring algorithms with reference models

For each IP algorithm a reference model is written. In contrast with the software code of the IP modules, these reference models do not focus on performance; they just capture the intention of the IP algorithm. As a consequence the code is much more readable than that of the real-time IP modules (for an example, see Figure 12). Hence they can be seen as a way to transfer knowledge about IP. MATLAB was chosen as the tool for implementing these reference models, since it is a well-accepted environment for modeling, visualizing and documenting in all kinds of mathematically oriented engineering disciplines. In our case, MATLAB allows us to record the tiniest details of an IP algorithm in a precise, concise, and intuitive way.

In the BRICS project reference models are now accepted as executable specifications6 (you can input an image and a processed output image will be generated) and are heavily used for verifying the IQ of the real-time implementations of IP modules.

6 This cannot be done with paper specifications! A reference model in MATLAB is also far more precise than a documented IP algorithm.


Moreover, the reference models also provide a means to experiment with various aspects of an algorithm, without being hampered by the poor readability of the highly optimized source code of the real-time implementation. They can also be used for upfront tuning of IP parameters by our customers.

Both the reference models and the optimized code are kept in sync by regular configuration management tools and a sound SCM process (i.e. in the same way as code and documentation are kept in sync).
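Conceptually, verifying the IQ of a real-time IP module against its reference model boils down to running both on the same input image and comparing the outputs pixel by pixel within a small tolerance. The following is a hypothetical sketch of such a check (in Java; the method names, stubs and tolerance are inventions, not the BRICS test harness):

public class ReferenceModelCheck {

    // Largest absolute pixel difference between two images of equal size.
    static int maxAbsDiff(short[][] actual, short[][] reference) {
        int max = 0;
        for (int r = 0; r < actual.length; r++)
            for (int c = 0; c < actual[r].length; c++)
                max = Math.max(max, Math.abs(actual[r][c] - reference[r][c]));
        return max;
    }

    public static void main(String[] args) {
        short[][] out = runRealTimeModule();  // e.g. the optimized module, via a test hook
        short[][] ref = runMatlabReference(); // e.g. an output image dumped by the MATLAB model
        int tolerance = 1;                    // allow fixed-point rounding differences
        if (maxAbsDiff(out, ref) > tolerance)
            throw new AssertionError("IP module deviates from its reference model");
    }

    // Stubs standing in for the two implementations under comparison.
    static short[][] runRealTimeModule()  { return new short[][] {{100, 200}, {300, 400}}; }
    static short[][] runMatlabReference() { return new short[][] {{100, 201}, {300, 400}}; }
}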

Actual implementation

/* Process a Line on a Vector Basis --------------------------------------*/
/* (NbIter = PixelCount/NbPixelInVector = PixelCount/8) ------------------*/
/*------------------------------------------------------------------------*/
for (Col=8; Col<(PixelCount-8); Col+=8)
{
    Vss1 = _mm_load_si128((__m128i *) &AddInR[Col]);    // Vss1 = In(i, j+8) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer0, VssGVerR, Vss2, Vss3)   // VssGVerR = Ker0*In(i,j+8)
    Vss1 = _mm_load_si128((__m128i *) &AddInRU1[Col]);  // Vss1 = In(i-1, j+8) (Vss:1)
    Vss2 = _mm_load_si128((__m128i *) &AddInRD1[Col]);  // Vss2 = In(i+1, j+8) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = In1 = In(i-1,j+8)+In(i+1,j+8) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer1, Vss1, Vss2, Vss3)       // Vss1 = Ker1*In1
    VssGVerR = _mm_adds_epi16(VssGVerR, Vss1);          // VssGVerR += Ker1*In1
    Vss1 = _mm_load_si128((__m128i *) &AddInRU2[Col]);  // Vss1 = In(i-2, j+8) (Vss:1)
    Vss2 = _mm_load_si128((__m128i *) &AddInRD2[Col]);  // Vss2 = In(i+2, j+8) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = In2 = In(i-2,j+8)+In(i+2,j+8) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer2, Vss1, Vss2, Vss3)       // Vss1 = Ker2*In2
    VssGVerR = _mm_adds_epi16(VssGVerR, Vss1);          // VssGVerR += Ker2*In2
    Vss1 = _mm_load_si128((__m128i *) &AddInRU3[Col]);  // Vss1 = In(i-3, j+8) (Vss:1)
    Vss2 = _mm_load_si128((__m128i *) &AddInRD3[Col]);  // Vss2 = In(i+3, j+8) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = In3 = In(i-3,j+8)+In(i+3,j+8) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer3, Vss1, Vss2, Vss3)       // Vss1 = Ker3*In3
    VssGVerR = _mm_adds_epi16(VssGVerR, Vss1);          // VssGVerR += Ker3*In3
    VEC_MULTIPLY_ROUND(VssGVer, VssKer0, VssGHor, Vss2, Vss3) // VssGHor = Ker0*GV(i,j)
    COMBINE_VEC_1(VssGVer, VssGVerR, VssGVer, VssGVerL, Vss1, Vss2, ssTmpa, ssTmpb)
                                                        // Vss1 = GV(i+1,j), Vss2 = GV(i-1,j) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = GV1 = GV(i-1,j) + GV(i+1,j) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer1, Vss1, Vss2, Vss3)       // Vss1 = Ker1*GV1
    VssGHor = _mm_adds_epi16(VssGHor, Vss1);            // VssGHor += Ker1*GV1
    COMBINE_VEC_2(VssGVer, VssGVerR, VssGVer, VssGVerL, Vss1, Vss2, Vss3, Vss4)
                                                        // Vss1 = GV(i+2,j), Vss2 = GV(i-2,j) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = GV2 = GV(i-2,j) + GV(i+2,j) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer2, Vss1, Vss2, Vss3)       // Vss1 = Ker2*GV2
    VssGHor = _mm_adds_epi16(VssGHor, Vss1);            // VssGHor += Ker2*GV2
    COMBINE_VEC_3(VssGVer, VssGVerR, VssGVer, VssGVerL, Vss1, Vss2, Vss3, Vss4)
                                                        // Vss1 = GV(i+3,j), Vss2 = GV(i-3,j) (Vss:1,2)
    Vss1 = _mm_adds_epi16(Vss1, Vss2);                  // Vss1 = GV3 = GV(i-3,j) + GV(i+3,j) (Vss:1)
    VEC_MULTIPLY_ROUND(Vss1, VssKer3, Vss1, Vss2, Vss3)       // Vss1 = Ker3*GV3
    VssGHor = _mm_adds_epi16(VssGHor, Vss1);            // VssGHor += Ker3*GV3
    _mm_store_si128((__m128i *) &AddOut[Col], VssGHor); // store result: VssGHor -> *AddOut
    VssGVerL = VssGVer;
    VssGVer = VssGVerR;
}

Reference model

%Define the convolution kernel
variance = max( 0.6, params.Sigma0 - 0.2*(level-1) );
x = -3:3;
Gauss1DKernel = exp( -(x.^2)/(2*variance) );
Gauss1DKernel = Gauss1DKernel / sum( Gauss1DKernel(:) );

% Apply a 7-by-7 normalized Gaussian kernel; note that it is separable!
Gn = conv2( Gauss1DKernel, Gauss1DKernel, Hn, 'same' );

Figure 12. Code snippet: the IP module (actual implementation) and the same algorithm in the reference model.

10.3.1.3 Testing with reference configurations (close to typical application use).

In BRICS a number of components are developed that can be used in multiple systems with different requirements. These components (e.g. IP modules) are developed as much as possible as separate entities with their own life cycle, and are tested separately as well.

However, it is of course not enough to test the components only in complete isolation. There are interfaces between components, and in a product the components are used in combination to build systems. Therefore, the set of components is tested with a number of reference configurations (graphs). These typical configurations are a prediction of what customer products will need as system functionality.

To prevent the number of configurations from getting too high and unmanageable, the components are released as a component suite.



Figure 13. BRICS component suite, testing in typical reference configurations.

10.3.2 Results

The major result is a completely refactored image processing platform, from the architecture level down to the actual code. The tough performance requirements for real-time IP could be met by using highly optimized code (SSE2 instructions on a multi-core architecture).

Let's elaborate a bit on this. The new BRICS platform can be divided into four main parts (see also Figure 14):

1. The framework, shown in the centre of Figure 14, implements the important "divide and rule" philosophy. It divides images into strips, which are then processed in parallel on different nodes (cores) in a PC network, and it takes care of all the administration needed to do this properly. In this way low-latency IP is realized.

2. IP modules implement IP algorithms (both classic and advanced). They are conceived as stand-alone packages that can be plugged in, and they can be chained into image processing graphs.

3. Tools comprise the set of applications that are needed during development, testing or integration of the BRICS components. They are not deployed in an end product (i.e. an X-ray machine).

4. Utilities provide common functionality like logging.



Figure 14. Composition of the BRICS component suite, with the 3 types of use cases.

The nice thing about the BRICS platform is that it acknowledges and addresses three use cases in a very explicit way; they map quite naturally onto the architecture. Algorithm developers use (read: should adhere to) the common interface that every IP module must provide. The typical system integrator uses the BRICS component suite to build application-specific graphs, start and stop them, send parameters, etc., and also to integrate the IP modules into a complete X-ray system. The BRICS team, in particular the testers, hooks in via the tools.

This is all very nice, but the bottom-line question is always: what can we do with the system? How does it help the end-users, i.e. doctors, to ease their work and improve the quality of diagnosis?

In order to give you a flavor of what can be done with the BRICS component suite, without going into the details of the image processing, two cases with real-life medical images are presented below.


Figure 15. Image enhancement with the BRICS component suite.

In the first case (see Figure 15) images of the coronary artery are shown, filled with contrast agent, and a guide-wire (bottom right). The left image is unprocessed; the right image is enhanced with the BRICS software. Note the improved contrast and sharpness.

The second case (see Figure 16) demonstrates a composite image made by the BRICS software. It shows the lumen of the carotid artery as a white string on a dark-grey background. On top of this lumen a guide-wire is superimposed.

The lumen or roadmap image was created by tracing a set of images with the contrast bolus, subtracting the background in order to remove disturbing bone structures, and finally inverting this intermediate result. The last step consists of an overlay of live subtracted images showing only the guide-wire.

The advantage of these composite images is that the lumen image has to be made only once (assuming that the patient doesn't move), saving contrast agent, which is toxic and harmful to the patient. This roadmap can then be used again and again by the clinical user for navigating the guide-wire.


Figure 16. A typical trace-subtract image (composite), made with the BRICS software.

10.3.3 Success Indicators

Besides the normal project tracking we did not measure specific metrics for refactoring and/or platform migration.

The ultimate and most important indicator is of course whether the project deliverable, i.e. the BRICS component suite, has made it to a commercial product sold on the market.

In the first two increments (Q2-2006 and Q4-2006) it was very hard to get on par with the 'good old' ASIC architecture (in particular with respect to performance). In the third increment (Q2-2007) this goal was achieved. The fourth increment and onwards are scheduled to end up in new products. This means that we now have an IP platform that enables us to innovate quickly on IQ, by including new and better IP algorithms, giving Philips X-ray scanners an outstanding market position.7

7 On the ASIC architecture it was really hard to change the IP. In 2005 and 2006 we had to implement a few IP extensions (not even that complex). One of these items was a direct response to improvements made by our main competitor; the other one was a cost-reduction item. Both extensions had a lead time of more than a year (which was of course far too long from a business point of view, but we had no choice). This proves that the hardware-based IP solution was indeed reaching its end-of-life stage, and that we are in a far better position now to innovate with the BRICS component suite.


10.4 Conclusion

10.4.1 Summary

In retrospect, four phases can be discerned in the BRICS project. Each phase resulted in a complete release. The focus of each phase was different.

Phase 1: Collect (not integrate) all the code from the previous feasibility study.

Phase 2: Make a stable framework.

Phase 3: Productize utilities and classic IP algorithms.

Phase 4: Add new IP algorithms.

This last phase is what we aimed for, so in the end the goal was reached. With a software-based solution for real-time image processing, faster innovation is now possible.

10.4.2 Lessons Learned

The number one lesson learned is that it is not easy to come up with a new platform that completely substitutes an old solution that has been optimized over the years. You have to invest heavily. It is like crossing the desert with a small bottle of water: you do not know beforehand where all the hurdles are that you have to take, and you do not know exactly how long the road is (i.e. when your new platform is mature enough). But once you have crossed it, it feels like being in paradise.

Other lessons learned:

Reference models work well for refactoring algorithms. It might even be considered a pattern which can be added to [6].

Releasing a component suite as a whole prevents a lot of maintenance work. It is not forbidden for customers of the BRICS software to use components of release X and X+n together, but in these cases no guarantee is given. Also, problem reports coming from mixed-release situations are not accepted.

Testing with reference configurations kept the amount of test work acceptable.

10.4.3 Final Recommendations

Our most important recommendations are listed below:

Refactoring should be a continuous activity; ideally, at least 10% of the effort should be reserved for it, to prevent design and code erosion.

Invest in defining a high-quality architecture.

Personnel changes, especially in the key roles of the project, should be kept to a minimum (they actually happened too frequently in this case study).

Support and belief of management in refactoring and platform migration is essential. Realize that it is largely a matter of patience and especially endurance.


11 Application of Concern Analysis

Identifying, understanding, and managing evolutionary changes in large software systems is challenging. When adding or changing a particular functionality, its implications for the rest of the system should be carefully analyzed. The parts affected by the change may be scattered over the system artifacts, making the task even more challenging.

The analysis is always done from a specific viewpoint and having specific questions in mind. A natural way to express various topics of interest in a system is using the simplest possible abstraction, the set. When the topics of interest – hereafter called concerns – are expressed as sets of system elements, the process of representing and extracting information about a software system can be established on well-understood set theory. Assuming that the concerns have meaningful contents from the viewpoint of a stakeholder, the concerns constitute a higher abstraction layer for the software system, providing different views for different stakeholders. Intuitively understandable queries can be constructed using such concerns, revealing new derived information on the system.

We have developed a concern-based approach to support software comprehension and analysis. From a concrete point of view, a concern is a set of software system elements that form a unit of interest for some stakeholder. Concerns can be overlapping and nested in an arbitrary way. The elements belonging to a concern may in principle be any elements included in the software artifacts, like individual requirements, UML model elements, code fragments, XML elements, etc.

The approach and the provided tool support allow the software engineer to define the concerns she is interested in, either manually or assisted by the tool. Based on an existing set of concerns, she can query the relationships and evolution of the concerns, e.g. to find out how a change in a certain functionality influences other functionalities. In essence, compared to existing tools supporting software comprehension, we propose the use of an additional abstraction layer, consisting of a set of concerns, on top of the software model. The actual analysis is then done by querying this concern layer. We demonstrate the applicability of the approach and tool support by using them to analyze the source file structures of an industrial large-scale product platform and of products built on top of this platform. A detailed description of the approach can be found in [14].

One of the main benefits of the proposed approach is that it supports the analyzer to gradually build up an understanding of the software model to be analyzed. The concerns in our approach can be seen as the basic building blocks and tools to support the analysis and to build a mental model of the (parts of the) software under analysis. New concerns can be easily constructed by applying set operations to the concerns. The user can, for instance, merge two concerns, build an intersection of them, or "subtract" one concern or its parts from the other.

The main steps in the concern-based approach are

1. Constructing the concern-enabled artifacts,

2. Identifying and creating the concerns in those artifacts, and

3. Analyzing the concern library using concern operations, especially queries.

The concern toolset was implemented as an extension of an existing architecting environment, INARI [13]. INARI is a prototype architecting environment supporting the representation of various structures that are not explicit in a software system. INARI is


built on top of the Eclipse platform and uses Rational Software Architect and its UML diagram capabilities extensively.

11.1 Problem Statement

11.1.1 Domain

A case study was undertaken to test the applicability of the concern manipulation toolset in a practical environment. The toolset was to be tested on a reverse engineered model of Nokia's ISA software platform. ISA, which underlies a major mobile phone product family, is a proprietary software platform created and maintained by Nokia.

11.1.2 Current Situation

ISA can be understood as a collection of compilation units containing modules, each of which depends on a number of header files. At some point during development, a design decision was made to place the different header files in a single, global directory to streamline compilation. However, as the platform evolved and grew, the number of header files in the global directory multiplied and reached a size that started to cause an increasing number of problems.

11.1.3 Goals and Expected benefits

The primary objective of the case study was to help solve this problem: to reduce the number of dependencies between ISA's global header file repository and the different modules. To achieve this, Nokia Research Center (NRC) created a UML model that focused on presenting the hierarchy between the different compilation units and modules, and the dependencies each of those modules has with the header files. This was done using reverse engineering tools developed at Nokia (ref. [16]). One of the goals of this case study was to identify header files that were endpoints for only a small number of modules (0-3 dependencies). Another goal was to find the specific compilation unit under which all modules dependent on a specific header file reside. The results could then be used to find an optimal location for each header file in the hierarchy of compilation units.

11.2 Solution

11.2.1 Approach

For concern querying, we implemented support for standard set operations as the core of a concern-based query language. Of the mathematical set operations, the ones we focus on in our work are union, intersection and difference. Their concern counterparts are defined in Table 1, with the corresponding set operation in parentheses. A fourth operation, nearest neighborhood (later referred to as neighborhood), was also implemented. Unlike the other operations, the neighborhood operation is not derived from mathematical set theory, but is an operation that proved very useful in the course of our work. A common feature of all these operations is that they take existing concerns as their parameters and return the result as a new concern.

The list of possible operations is in no way limited to the ones we decided to implement. For example, one of the unused mathematical set operations, the Cartesian product, is sensible in the concern realm as well. However, it was left unimplemented, as it did not seem to be useful in the scenarios we studied.


Table 1. Concern operations

Merge (union). Symbol: +. Type: binary, commutative. Merging of concerns A and B results in a concern containing all the elements of A and B, excluding duplicates.

Overlap (intersection). Symbol: &. Type: binary, commutative. Overlapping A and B results in a concern containing all the elements that are common to both A and B.

Slice (difference). Symbol: -. Type: binary, not commutative. Slicing concern A with B results in a concern that contains the elements that belong to A but do not belong to B.

Nearest neighborhood. Symbol: |. Type: unary. The nearest neighborhood of concern A results in a concern that contains all elements that have a relationship with an element in A and that do not belong to A, including the direct or indirect parents of these elements. There can be different versions of this operation for different kinds of relationships.

All of the operations may be chained to form more complex expressions. No precedence between the binary operations is defined (i.e. they are evaluated from left to right), whereas the unary operation takes natural precedence over the binary ones. The order of evaluation can be altered by using parentheses. It is also important to note that the laws of distributivity and associativity of the mathematical set operations remain valid for the concern operations. A minimal sketch of these operations is given below.
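To make the operations of Table 1 concrete, the following is a minimal, hypothetical sketch on top of plain Java sets. The Concern class, the string-based elements and the dependency lookup are inventions for this illustration (they are not the CMT/INARI API), and the parent-inclusion part of the neighborhood operation is omitted for brevity:

import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

final class Concern {
    final Set<String> elements = new LinkedHashSet<>();
    Concern(Collection<String> es) { elements.addAll(es); }

    Concern merge(Concern other) {   // union: A + B
        Concern r = new Concern(elements); r.elements.addAll(other.elements); return r;
    }
    Concern overlap(Concern other) { // intersection: A & B
        Concern r = new Concern(elements); r.elements.retainAll(other.elements); return r;
    }
    Concern slice(Concern other) {   // difference: A - B (not commutative)
        Concern r = new Concern(elements); r.elements.removeAll(other.elements); return r;
    }
    // Nearest neighborhood: all elements related to an element of A, not in A.
    Concern neighborhood(Map<String, Set<String>> deps) {
        Concern r = new Concern(List.of());
        for (Map.Entry<String, Set<String>> e : deps.entrySet())
            for (String target : e.getValue())
                if (elements.contains(target) && !elements.contains(e.getKey()))
                    r.elements.add(e.getKey()); // e.getKey() depends on an element of A
        return r;
    }
}

public class ConcernDemo {
    public static void main(String[] args) {
        // Toy model: modules m1 and m2 depend on header h1; m2 also on h2.
        Map<String, Set<String>> deps = Map.of(
            "m1", Set.of("h1"), "m2", Set.of("h1", "h2"));
        Concern h1 = new Concern(List.of("h1"));
        System.out.println(h1.neighborhood(deps).elements); // -> [m1, m2] (order may vary)
    }
}

The main-method query is exactly the header-file scenario discussed below: the neighborhood of a single-header concern yields the concern of all its dependent modules.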

As mentioned in Section 11.1.3, the model analyzed was a reverse engineered UML model presenting the hierarchy between the different compilation units and modules, and the dependencies each of those modules has with the header files. Therefore, all the concerns consisted of UML class elements only.

The key to solving the particular problem at hand was the neighborhood operation. Running the neighborhood operation on a concern that consists of a single header file returns a concern containing every module that has a dependency relationship with that header file. The result concern also contains information on the structural (nested) location of the found modules. The pattern view of the INARI tool thus displays the result for a single header file in a convenient form, exhibiting the users of a header file together with their position in the system hierarchy.

Some modifications to the original concern query tool were made to facilitate running the operation for all of the header files as a batch run. Because the batch run produced thousands of concerns, it was decided to present the results in a more convenient format than the result patterns of the INARI tool; browsing the results in INARI would have been too slow, both usability- and performance-wise. Instead, an XML-based result format was developed, presenting only the essential parts of the resulting data.

We also studied the use of other concern operations in the context of this case study. The studied scenarios include tracking the evolution of the concerns in the ISA model, as well as studying how different product configurations relate to one another.


11.2.2 Major Results

All goals set for the case study (discussed in Section 11.1.3) were met. The main results of the case study are the following:

1. Information on the header dependencies in different modules. The information was used at Nokia for estimating the possible refactoring options.

2. A revised version of the CMT (Concern Manipulation Toolset) tool. Based on the needs and results of the case study, CMT was extended with the neighborhood operation. In addition, a component was implemented that generates an additional view, namely an HTML representation that allows easy navigation of the dependency chains.

3. An HTML-based view of the findings. This view was easy to comprehend and allowed navigation of the dependency chains of interest. The findings were mainly communicated through this view.

4. An MSc thesis [14]

5. A publication [15]

11.2.3 Success Indicators

The main success indicators were the strong commitment shown by the researchers, the good and active collaboration with the nationally funded INARI research project and its researchers, and the good and active collaboration with Nokia.

11.3 Conclusion

11.3.1 Summary

Reverse and re-engineering needs and projects have motivated the development of various tools that support program comprehension tasks. We have developed a concern-based querying approach and tool support (CMT) to support software comprehension. The first step in this approach is to construct the initial set of concerns that are later used for querying. The concern library built is extensible: new concerns can be constructed e.g. by applying specific operations (including set operations) to the existing concerns. The model to be analyzed, given in UML in our current implementation, is left untouched. That is, the concerns are bound to model elements instead of annotating the model elements with information about the concerns. This makes our solution scalable. We have applied the approach to analyze header and source file dependencies in Nokia's large-scale product platform, ISA.

As described in Section 11.1.2, ISA's header files had over time accumulated in a single global directory, which caused a growing number of problems. The primary objective of the case study was therefore to reduce the number of dependencies between ISA's global header file repository and the different modules.

The case study was judged successful by both Nokia and TUT. All the goals set were met, and additional results were gained. These included e.g. valuable information on the usefulness of the applied approach and tool (CMT), and ideas for


further development of it. Also, the scalability of CMT was proven. The subject system under study, ISA, is a large software system. Over the years it has evolved from a small system with hundreds of components and less than one million lines of code to a large, complex system with thousands of components and several million lines of code. The concern-based analysis of the reverse engineered model of ISA was conveniently done using CMT.

11.3.2 Lessons Learned

One of the lessons of this case study was that, especially with large models and large amounts of data in general, careful attention should be paid to how the information is shown to different stakeholders. In our case study, for instance, some of the concern queries resulted in thousands of concerns. Therefore, it was decided to present the results of the batch run in a more convenient format than the result patterns in the INARI tool, as browsing the results in INARI was assumed to be too slow, both usability- and performance-wise. Thus, an XML-based result format was developed, presenting only the essential parts of the resulting data. For each header file, we present only the name of the file, the number of modules dependent on it, the topmost compilation unit containing those modules, and the location of the dependent modules in the compilation unit hierarchy. This XML data was then transformed into HTML using an XSL transformation, to allow the analysis of the results in any Web browser. In HTML form, and with the help of some JavaScript, the result data could be ordered by the different features or filtered according to the number of dependencies.
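For illustration, a result record in such a format could look as follows; this is a hypothetical reconstruction (the element and attribute names are invented), not the actual format used in the case study:

<!-- Hypothetical result record for one header file. -->
<header name="display_if.h" dependentModules="2" topmostCompilationUnit="ui_core">
  <module path="ui_core/render/pipeline"/>
  <module path="ui_core/widgets/button"/>
</header>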

Another lesson learned was that tool support that helps the software analyzer to gradually build up an understanding of the software model of interest is quite beneficial, and in some cases even necessary. The concerns in our approach can be seen as the basic building blocks and tools to support the analysis and to build a mental model of (parts of) the software under analysis. New concerns can be easily constructed by applying set operations to existing concerns. The user can, for instance, merge two concerns, build an intersection of them, or "subtract" one concern or parts of it from the other.

Our other lessons learned are:

The CMT tool works well for concern-based software analysis tasks.

The concerns, and concern queries in particular, help the analyzer to gradually build an understanding of the subject system.

A concern-based software analysis approach is scalable.

It is important to have support for both temporary and persistent queries and their results. The former encourages "playing around with concerns", which is not only convenient from the point of view of the analyzer but also highly useful for learning the subject system. It is also important that the results of such queries can be saved if desired.

The concerns themselves support communication on the findings.

11.3.3 Final Recommendations

Our most important recommendations are listed below:

Choose software analysis tools that are extensible.

Choose software analysis tools that are scalable.


Try to integrate software analysis practices and tools as tightly as possible with software maintenance tasks and tools.

For presenting the final results, carefully select the formats for different stakeholders.


12 Analysis of Nokia Maemo platform

12.1 Problem Statement

12.1.1 Domain

We investigate the maemo platform, an open source development platform for Linux-based handhelds such as Internet Tablets. It is built from open source components, with some modifications to integrate and run with the target handhelds and their OS. On top of these existing open source components, the Nokia open source development team has been working on the UI and application framework (Hildon) and some further custom libraries (named osso and maemo).

Figure 17. Maemo architecture as documented on the main website http://maemo.org

12.1.2 Current Situation

The maemo platform is an interesting case study for quality trend analysis, as it is composed of building blocks that have been around for quite some time as well as building blocks that are relatively new. Due to the large size of the application (estimated at 3 MLOC), this case study also serves as a scalability test for the Fact Extraction Tool Chain (Fetch), a static analysis tool that has been developed within SERIOUS to support, among other things, trend analysis.

12.1.3 Goals and Expected benefits

In this case study, we aim for 3 goals:

1. Recovering the architecture of the application in terms of building blocks and their interaction, throughout time, to observe whether and how the application changes.

2. Monitor a selection of metrics following the D3.7 ISO 9126 approach to


observe how the internal quality changes over time. We hypothesize that the quality decreases over time.

3. Monitor the time and resources required for the analysis.

12.2 Solution

12.2.1 Approach

The approach that we take is based upon static analysis using the Fact Extraction Tool Chain (Fetch). With this tool, we build a model from the source code to achieve a higher level of abstraction at which to reason about the system. Figure 18 presents the meta-model used in Fetch, targeted towards (hybrid) OO systems. As maemo is entirely written in C, in practice the obtained model uses only the limited number of meta-model entities that apply to procedural languages, such as Modules, Files, Functions, Global Variables, etc.

Figure 18. Metamodel used in Fetch

We compose a model for each of 8 snapshots of the system over the period of 2 years (quarterly between January 2006 and October 2007).

After building models for each of these snapshots, we use the CrocoPat query engine included in Fetch to (i) recover the architecture and compare the result with the documented architecture, and (ii) apply trend analysis to a selection of internal metrics.

We use the graph layout engine of the Graphviz software to compose a graph in which lower-layer building blocks (offering services to others) are placed at the bottom, while building blocks at higher levels (consumers) are placed at the top.

For the quality analysis, we captured the metrics Lines of Code (LOC) and Cyclomatic Complexity (CC) per function, as provided by pmccabe (part of Fetch), and processed them into box plots using the R statistical environment. A boxplot represents multiple descriptive statistics (a minimal sketch of these statistics follows the list below):

Median: The bold line in the middle of each box plot represents the median value.


Quartile boundaries: The rectangle surrounding the bold line is bounded by the first and third quartiles. Each quartile represents 25% of the population.

Outliers: Outliers are exceptional values, e.g. values that fall more than 1.5 times the Inter-Quartile Range (IQR) outside the quartile boundaries. The IQR is calculated as the difference between the third and first quartile.
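As a minimal sketch of these statistics (mirroring what R's boxplot() reports; this is not Fetch or pmccabe code, and the sample values are invented), the quartiles and the 1.5*IQR outlier fences can be computed as follows:

import java.util.Arrays;

public class BoxplotStats {
    // Linearly interpolated quantile of a sorted array, q in [0,1].
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int i = (int) pos;
        double frac = pos - i;
        return i + 1 < sorted.length
                ? sorted[i] * (1 - frac) + sorted[i + 1] * frac
                : sorted[i];
    }

    public static void main(String[] args) {
        double[] loc = {12, 18, 20, 25, 26, 30, 41, 75, 120, 9677}; // toy per-function LOC values
        Arrays.sort(loc);
        double q1 = quantile(loc, 0.25), median = quantile(loc, 0.50), q3 = quantile(loc, 0.75);
        double iqr = q3 - q1;
        double upperFence = q3 + 1.5 * iqr, lowerFence = q1 - 1.5 * iqr;
        System.out.printf("median=%.1f Q1=%.1f Q3=%.1f IQR=%.1f%n", median, q1, q3, iqr);
        for (double v : loc)
            if (v > upperFence || v < lowerFence)
                System.out.println("outlier: " + v); // e.g. the 9677-line function
    }
}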

Both procedures are completely automated, i.e. they only require a developer to launch a script, passing the root source code directory as a parameter.

12.2.2 Major Results

12.2.2.1 Re-documentation

To re-document the architecture of the platform, we query the model for a module interdependency view. In such a view, the interaction between modules is visualized. Modules are building blocks of the platform that can be identified as coherent units.

The maemo platform consists of open source software components that are brought together to build up a complete desktop environment. As such, we consider each individual piece of open source software to be a module.

The interaction between modules A and B (directed) is defined as either

a function of module A calling a function of module B

a function of module A accessing a data member of module B

Because the focus is on the overall architecture rather than on individual dependencies, we remove transitive interactions. That is, if module A depends upon module B and module B depends upon module C, then a direct interaction between A and C is not shown (a minimal sketch of this filtering step is given below).
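The following is a minimal sketch of this filtering step under the stated definition: a direct edge A -> C is dropped when another direct target B of A already reaches C. The module names and the toy graph are illustrative only; this is not the actual CrocoPat query used in the case study.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TransitiveFilter {
    public static void main(String[] args) {
        // Toy module dependency graph (illustrative names).
        Map<String, Set<String>> dep = new HashMap<>();
        dep.put("app",    new HashSet<>(List.of("hildon", "glib")));
        dep.put("hildon", new HashSet<>(List.of("gtk")));
        dep.put("gtk",    new HashSet<>(List.of("glib")));
        dep.put("glib",   new HashSet<>());

        // Removing a transitive edge never changes reachability (the indirect
        // path exists by definition), so the filtering order does not matter.
        for (String a : dep.keySet()) {
            Set<String> direct = dep.get(a);
            Set<String> transitive = new HashSet<>();
            for (String c : direct)
                for (String b : direct)
                    if (!b.equals(c) && reaches(dep, b, c))
                        transitive.add(c);
            direct.removeAll(transitive);
        }
        System.out.println(dep); // app keeps only hildon: app -> glib was transitive
    }

    // Iterative depth-first search: does 'from' (indirectly) depend on 'to'?
    static boolean reaches(Map<String, Set<String>> dep, String from, String to) {
        Deque<String> stack = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>();
        while (!stack.isEmpty()) {
            String n = stack.pop();
            if (n.equals(to)) return true;
            if (seen.add(n)) stack.addAll(dep.getOrDefault(n, Set.of()));
        }
        return false;
    }
}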




Figure 19. Dependency view throughout time (quarterly snapshots from 2006-01-01 to 2007-10-01).

The results of the dependency view throughout time are striking: over the course of two years, the system has quickly grown into a platform of more than 75 components. The system shows a series of layers that have remained constant over time.

From the graphs, we can roughly extract three layers of building blocks. The first layer consists of low-end libraries that form the interface between hardware and operating system on one side, and UI components on the other. Secondly, we identify UI components and libraries. Finally, we distinguish end-user applications.

In the first category, we distinguish the libraries glib, gtk+, python, dbus, gnome-vfs, libxml, gconf, etc. As such, we can confirm the platform layer in the architectural documentation. The source code of the X server must be stored externally, as it cannot be found here. Furthermore, we can identify the dependencies between these platform libraries.


Secondly, we identify UI building blocks, such as libOSSO, hildon libs, maemo-theme-tools, libglade, hildon control panel, etc.

From the model, we cannot identify many end-user applications. Still, we can see how such applications would depend on other building blocks, derived from the positions of maemo-examples and hello-world in the dependency graph.

12.2.2.2 Quality Analysis

Long functions - Figure 20 and Figure 21 show the distributions of function sizes for the first and last snapshot in the form of boxplots. In the given figures, severe outliers (above 200 LOC) are left out to keep the boxplots readable. Comparing the distributions of the two snapshots, we deduce that not much change in the distribution happened: the median lies around 25 LOC, and outliers start at around 75 LOC.

Figure 20. Boxplot for the distribution of function size (LOC) for the 2006-01-01 snapshot

Figure 21. Boxplot for the distribution of function size (LOC) for the 2007-10-01 snapshot

Continuing with the outliers, Table 1 and Table 2 show the top 10 outliers for long functions in the first and last snapshot, respectively. Five functions occur in both lists, two of which have increased in size over time. From these lists, we deduce that some outliers have been growing and some new ones have been introduced, while three top-10 outliers have remained stable.

Table 1. Long functions: outliers for 2006-01-01

Name                                                        LOC
process_node(uint32_t)                                      9677
main(int, char*[])                                          8446
main(int, char**)                                           5815
gather_proc_info()                                          3832
process_socket(uint32_t)                                    2577
readdev(int)                                                1429
ncache_load()                                               1396
regex_compile(char*, size_t, long, re_pattern_buffer*)      1395
gdk_event_translate(GdkDisplay*, MSG*, gint*)               1339
gtk_widget_class_init(GtkWidgetClass*)                      1247


Table 2. Long functions: outliers for 2007-10-01

Name                                                        LOC
main(int, char**)                                           14470
main(int, char*[])                                          13873
process_node(long)                                          9677
DB_associate(DBObject*, PyObject*, PyObject*)               4821
gather_proc_info()                                          3832
sock_getsockopt(PySocketSockObject*, PyObject*)             3380
draw(cairo_t*, int, int)                                    2678
process_socket(long)                                        2577
zgesdd_(char*, integer*, integer*, doublecomplex*, integer*, doublereal*, doublecomplex*, integer*, doublecomplex*, integer*, doublecomplex*, integer*, doublereal*, integer*, integer*)  2497
main(void)                                                  2098

Figure 22. Boxplot for the distribution of cyclomatic complexity for the 2006-01-01 snapshot

Figure 23. Boxplot for the distribution of cyclomatic complexity for the 2007-10-01 snapshot

Looking further into the outliers, we notice that 335 out of 18487 functions in the first snapshot have a cyclomatic complexity of 30 or more (994 out of 50087 in the last snapshot); a CC of 30 is often stated as the maximum for keeping a function testable.

Table 3. Cyclomatic complexity per function: outliers for 2006-01-01

Name                                                        CC
process_node(uint32_t)                                      2233
gather_proc_info()                                          695
main(int, char*[])                                          632
process_socket(uint32_t)                                    561
main(int, char**)                                           312
append_rule_from_element(BusConfigParser*, char*, char**, char**, int, DBusError*)  291
regex_compile(char*, size_t, long, re_pattern_buffer*)      291
gdk_event_translate(GdkDisplay*, MSG*, gint*)               255
g_win32_getlocale(void)                                     235
gdk_event_translate(GdkDisplay*, GdkEvent*, XEvent*, int)   230


Table 4. Cyclomatic complexity per function: outliers for 2007-10-01

Name                                                        CC
process_node(long)                                          2233
main(int, char**)                                           1833
main(int, char*[])                                          1055
gather_proc_info()                                          695
process_socket(long)                                        561
initcl(void)                                                530
doProlog(XML_ParserStruct*, ENCODING*, char*, char*, int, char*, char**, char)  529
DB_associate(DBObject*, PyObject*, PyObject*)               410
inital(void)                                                404
PyEval_EvalFrameEx(PyFrameObject*, int)                     399

From the tables, we notice that the most complex function (with a CC of more than 2000) has remained constant over time, yet seven of the top ten functions have undergone complexity-increasing changes.

Building and querying a model of this large system was a challenging task for Fetch. Constructing the model for the last and largest snapshot took about 36 hours. Querying for the dependency view took less than an hour, while the metrics scripts took, in the worst case, about one month of processing time. For the queries, we needed about 2 GB of main memory.

12.2.3 Success Indicators

Recovering the architecture of maemo via a dependency view proved to be a success, as we were able to represent the tens of modules and their interactions in an easy-to-interpret way that allowed us to confirm the architectural documentation found on the maemo website.

Using pmccabe and R, we used the box plot notation to compare the distributions of function metrics across time. Although the interquartile box and the median did not shift much, we did notice the growing complexity of the metric outliers. As such, we identified the functions most in need of refactoring.

12.3 Conclusion

12.3.1 Summary

In this case study, we evaluated the use of Fetch, a static analysis tool developed during SERIOUS, for the architectural recovery and quality trend analysis of large, evolving systems.

Based upon the source code, we could construct an architectural dependency view that matches the available architectural documentation, yet in more detail. Furthermore, we identified functions in need of refactoring, as they were large and complex and kept growing with every snapshot.

12.3.2 Lessons Learned

Handling large systems with Fetch has been shown to be doable, yet some queries require an unacceptable amount of time. Based on these results, the queries have been rewritten in a more optimal way, but they still require tens of days to run. Better performing data backends than the flexible RSF data format, together with the graph query engine, may be needed to handle such cases.


13 Architecture Recovery of a Legacy Imaging System

13.1 Problem Statement

13.1.1 Domain

Rationally and cost-effectively planning and controlling changes throughout the software evolution process is one of the hard-core challenges that Software Engineering has to tackle. Dealing with changes is particularly difficult when those who have to apply them do not understand the software system, and by and large this is not a rare case. Properly documented software is not the norm. As a result, most of the relevant information is buried in the heads of those who took part in the original development.

This case study proposes a methodology for comprehending the underlying architecture of a system by means of visualization techniques and abstraction methods. It is executed on a medical imaging product, based on Java, supplied by Ibermática, which is currently in use in several Spanish hospitals. The system allows doctors to visualize high-resolution medical images and manipulate them by applying several transformations.

This case study will document the architecture of the existing system, with a special focus on the quality-related aspects. The architecture recovery process will be based on QAR (Que-ES Architecture Recovery), a generic recovery workflow based on the traditional Extract-Abstract-Present paradigm. The process will be adapted to the specifics of the system. Some highlights of the case study are:

Use of general-purpose, widely used, visual tools. Instead of the existing recovery-specific frameworks, such as Moose or Rigi, the case study chooses some well-known modeling and profiling tools, such as Omondo UML or Eclipse TPTP. These tools allow a continuous visualization of the system, which should be very valuable in this type of process. Thus, this case study also evaluates their suitability for architecture recovery activities.

Combined analysis of static and dynamic views.

Use of software metrics to aid the recovery process and evaluate the quality of the system.

13.1.2 Current Situation

Program understanding has been a notorious problem since the early days of programming and has been thoroughly treated by the research community. One of the main areas of research for addressing this issue is architecture recovery (or architecture reconstruction), which is a discipline within the reverse engineering domain geared to retrieve the underlying architecture of existing systems.

Traditional architecture recovery methods are tailored to the enterprise domain, targeting large systems with generally large amounts of legacy code. The generally accepted procedure for dealing with these systems is the Extract – Abstract – Present paradigm [18]:

Extract: Extraction techniques are used to gather raw data regarding the system's architecture. The main sources for extraction are source code (and source code


repositories), available documentation, experts, and the runtime behavior of the system.

Abstract: Apply queries, filters and transformations to the raw extracted model in order to refine it.

Present: Create representations of the recovered architecture in an easily understandable form, allowing a visualization of the system. Diagrams can alternatively focus on different concerns to provide different views of the system.

Some of the most representative architecture recovery processes found in the literature are Symphony, SAR, and QAR.

Symphony [19] is a process model for reconstructing software architecture views [20]. It is the result of a joint effort for defining a unified method for architecture reconstruction. The process is composed of five phases, covering both the design of the reconstruction process (where the process designer identifies the views and viewpoints of interest) and its execution (performed by the reconstructor).

Krikhaar's SAR method [21] adopts the extract-abstract-present paradigm, aiming to address industrial needs at Philips Healthcare. In his reconstruction process, the source code (which is the main source of information for recovery) is analyzed and reduced to manageable units of information, called InfoPacks. These units are then processed, usually employing Partition Relation Algebra, to obtain relevant aspects of the software architecture, which are called ArchiSpects.

QAR (Que-ES Architecture Recovery) [22] is a generic architecture recovery workflow that integrates proposals from previous authors. The process follows the extract-abstract-present paradigm, and divides the extraction process into three activities (documentation analysis, static analysis and dynamic analysis). QAR offers a process framework for architecture recovery that can be tailored to the specifics of the application domain.

Several recovery-enabling frameworks and tools have been developed to perform architecture recovery. MOOSE [23] and Rigi [24] are among the most widely used.

MOOSE is an extensible and scalable reengineering environment written in Smalltalk that uses FAMIX as a language-independent meta-model for representing object-oriented sources. However, the lack of standardization has impeded its adoption by tool providers. Models can be directly generated or imported from external parsers (Java, C++ and Cobol parsers are available). MOOSE stores these models in a repository and provides functionality for browsing, manipulating and storing them to disk. This core architecture is complemented by tools such as Codecrawler [25], which enables advanced visualization and transformation operations.

Rigi is a programmable reverse engineering environment. It extracts information through several language-specific parsers into the Rigi Standard Format (RSF). Rigi can visualize data as hierarchical typed graphs, and lets users navigate the hierarchical models and customize their layout. The user controls the reconstruction process by manually clustering and filtering the less relevant nodes. Rigi also supports automating the visualization and transformation operations with a Tcl-based language (RCL). These features make Rigi the visualization tool of choice for a variety of workbenches, e.g. Dali.

Rigi and MOOSE use different internal representation meta-models; although the latter is able to import data in RSF format, it is quite difficult to share information between


them. The lack of standardization in this area is a clear barrier for practitioners and tool providers outside the recovery domain.

13.1.3 Goals and Expected benefits

Architecture recovery processes cannot be efficiently executed without the aid of tools, as these processes involve data gathering and visualization activities, which can be fully automated, or at least semi-automated with human supervision. However, there are parts of the process, especially the abstraction phase, which require human reasoning and thus cannot be performed by tools. We have chosen QAR because it is the most generic approach to architecture recovery. In addition, we selected specific Java-based tools for each QAR stage. These tools are Jude [26], Omondo UML Studio [27] and Eclipse TPTP (Test and Performance Tools Platform) [28][29].

Using these tools puts recovery processes within reach of staff not familiar with traditional methods and processes. The main limitation of the process is its scalability; it may need some adjustment to be applicable to larger projects.

Concretely, in this case study the legacy imaging system is poorly documented and no architecture description exists. The architecture recovery stage will be useful for the evolution of this system. This evolution will be performed in a new case study, and the new system will be based on a SOA (Service Oriented Architecture) approach.

13.2 Solution

13.2.1 Approach

This section describes the experience obtained with the application of the QAR architecture recovery process to a real case: a medical image viewer from Ibermática.

QAR (Que-ES Architecture Recovery) defines a generic workflow, made up of five types of input data, five processes and four significant results, with the structure shown in Figure 24. This process is designed to be flexible, so each instantiation of the process will vary depending on the existing input data.

The first steps to follow before starting the process are checking its applicability to the studied system and collecting all the available input data, in order to correctly define the instantiated process to follow.


Figure 24. QAR Workflow

The QAR workflow can only be applied to accessible systems (a system is considered accessible when it is well documented and has its source code available). The latter condition is mandatory for this process, and is satisfied in this case. The only part of the system for which we do not have sources is the DICOM.jar library, but it is only related to loading DICOM-format images, and its role in the application structure is easily understood.

However, when validating the first requirement we found that the documentation of the system is scarce, consisting only of the Javadoc from the source code and the user manual. In spite of that, the small size of the system (about 10,000 lines) should qualify it as accessible, although the scope of each individual QAR process will be limited by the available inputs.

Once we have verified the viability of the concrete case study, the next step is to gather a complete list of all the available input sources for the process, as shown in the next section.

13.2.2 Major Results

The first step is identifying the available input for our case study, classified in the predefined categories.

13.2.2.1 External input data

Available documentation:


User manual: 13 page document describing the functionality of the application to the users.

Source code documentation (Javadoc)

JUnit Test cases

Documentation and tutorials about the APIs used by the system, especially JAI (Java Advanced Imaging)

Documentation about domain model concepts (medical imaging systems), e.g. the DICOM standard, a specification of medical image storage and transport formats partially supported by the system

Table 5 - System's source code

Lines of source code 9973

Number of classes 211

Number of packages 17

Once we have identified the inputs of the process, the first conclusion is that we lack specific inputs for the final process of the workflow, the presentation process. Consequently, this instance of the workflow will skip that last process, integrating its objectives into the abstraction process.

13.2.2.2 Information extraction

In this process we analyze the system documentation in order to obtain the conceptual model of the system. The main information source will be the user manual, from which we obtain the following information about the system. We will also extract information from the Javadoc, and from the APIs and domain documentation.

13.2.2.2.1 User manual analysis

The analyzed tool is part of a larger system, although it can be used as a standalone application. For the purposes of this recovery process we will treat it as an isolated application.

The system is a client tool that allows visualizing medical images, supporting the TIFF, BMP and JPEG formats and the DICOM medical standard image format. The user can display multiple images in the window, apply several transformations to the images and store the modified images in the file system. The program can be started specifying a work folder; in that case, smaller versions of the images will be displayed in the icons bar (Barra de Iconos) and can be displayed in the viewer window (Visor) either one by one or as a complete group.

The program can display up to 16 images simultaneously. The number of images on screen depends on the resolution parameter (1x1, 2x2, 3x3, 4x4). In order to improve the viewing of the images, the application allows the user to zoom in and out, scroll and use a magnifying lens on any of the loaded images.

The user can also apply several transformations to the images in order to ease the diagnosis. The supplied transformations in this version are: Negativa, Ecualizado, Realce, Suavizado, Bordes, Nivel/Ventana, Inversa and Espejo. Excluding the latter two, the resulting image from each operation is created as an additional image in the Visor, as long as there is enough room with the selected resolution. The Inversa and Espejo operations are instead applied to the selected image. These operations don't support configuration parameters, except the Window/Level (Nivel/Ventana) operation, which can be controlled by the user, either by dragging the mouse over the screen or by configuring the Window and Level numeric values in a dialog with two sliders. These values can be stored in the file system for future reuse.

The user can also show/hide the image information of each displayed image, consisting of modality, zoom level, window/level and number of bits.

Finally, the user can save any modified image.

The user has three different input modes for executing these commands: tool bars, contextual menus and hot keys.

The application's top level GUI element is the main window, which contains a tool bar, a menu bar, and two containers. These containers are the icons bar (Barra de Iconos), where thumbnails of the work folder images are displayed, and the viewer (Visor), which displays a number of images controlled by the resolution.

13.2.2.2.2 Javadoc analysis

First of all, the Javadoc study reveals that parts of the code seem to be taken from external sources and adapted to fit the application's needs. We also quickly detect the external APIs used by the code: Swing for the GUI elements and JAI for image loading and processing. After a short search, the external parts of the code turn out to come from two sources: the source code of Building Imaging Applications with Java, a book by Lawrence Rodrigues covering AWT, Java2D and JAI in the context of imaging applications, and Sun's JCL code from the jai-demos project on java.net.

It is precisely in these external classes that we have detected several inconsistencies between the source code and the documentation, with mistakes such as wrong parameters listed, revealing that the developers made changes to the external code in order to adapt it to the system but did not always update the documentation. This fact relegates the Javadoc to a merely informative role, although the general information about the classes and the domain elements will still be useful.

The source documentation will also be used when analyzing the static view of the system, in order to achieve a better understanding of the classes that form the system.

13.2.2.3 Static-view extraction

This process is the most common approach in the architecture recovery discipline. It is executed in parallel with the dynamic-view extraction, so both processes could theoretically be executed in any order. In this case study, the tool-assisted work was executed first for both processes, after which the raw data obtained from the tools was interpreted, combining both views in order to get a better understanding of the system.

So, in a first step, we use several tools to extract an architectural static-view from the system, represented as UML class diagrams. In order to simplify this task we analyze each package separately, including the first-level dependencies from the elements of the rest of the system, and we also analyze the inter-package dependencies. This data will be complemented with several package metrics that will be used in a future study to measure the quality of the software architecture.

13.2.2.3.1 Tools evaluation


There are numerous tools that can automatically generate these products, so it is necessary to do an initial evaluation of each tool in order to choose the best options. All the analyzed tools can be used freely for academic purposes, although only the metrics extractors are open source.

Omondo Free Edition 2.1: This Eclipse plug-in is able to generate class diagrams from code, and also package dependency diagrams. The interface is very intuitive and the resulting diagrams are attractive. It is also designed for navigating through the diagrams, dynamically coloring the dependencies between elements in order to distinguish incoming from outgoing ones. Unfortunately, several problems make this tool much less desirable: on one side, the diagrams can only be exported in SVG, but the main problem is that the plug-in is very buggy, with changes to diagrams lost from time to time and sporadic hang-ups related to the 'upgrade to commercial' dialogs. This relegated the program to being used only for obtaining the diagrams no other tool could generate: the package dependencies diagram.

Jude Community Edition 2.4: Standalone Java application that imports source files and generates the resulting class diagrams from code, detecting dependencies and inheritance. The tool has some limitations, as it does not detect dependencies from calls to static methods or from method arguments. On the upside, the tool is fast and allows exporting the diagrams in several image formats. Because of these features it has been the tool of choice for generating class diagrams.

Poseidon for UML 4 Community Edition: This tool has functionality quite similar to Jude, but it is slower and less flexible when exporting the results, so it was discarded in favor of Jude.

Omondo Studio Edition (Academic License) 2.1: The full version of Omondo includes many additional features and, in an initial evaluation, looks much less buggy. Regarding the reverse engineering process, it adds byte code analysis, which results in improved association detection, including navigability and cardinality information. It also displays qualified associations and allows exporting in standard image formats. The license for this version arrived too late in the process to be a viable option, but it is probably the best option for future applications of the process.

JDepend 2.9: Open source Java tool that analyzes the source code of a project, computing package dependencies and package cycles and obtaining package metrics (Abstractness, Stability and Distance). There is an Eclipse plug-in (JDepend4Eclipse) that contributes a perspective to display this information, including a visualization of the metrics, although unlike the standalone application it does not support exporting the information. It was used to obtain package metrics for analysis in later steps.

Eclipse Metrics Plugin 2.7: Eclipse plug-in that generates reports from projects, gathering several metrics. The information includes data and graphs for several type and object metrics.
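Besides its Eclipse front-end, JDepend also exposes a small Java API that can be scripted. A minimal sketch (written against the JDepend 2.9 API; the classes directory path is illustrative) of how the package metrics used in this study can be gathered programmatically:

    import java.util.Collection;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;

    public class PackageMetricsReport {
        public static void main(String[] args) throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes");   // compiled classes of the analyzed system
            Collection packages = jdepend.analyze(); // returns the analyzed JavaPackage objects
            for (Object o : packages) {
                JavaPackage p = (JavaPackage) o;
                System.out.printf("%s Ca=%d Ce=%d A=%.2f D=%.2f cycle=%b%n",
                        p.getName(),
                        p.afferentCoupling(),   // incoming package dependencies
                        p.efferentCoupling(),   // outgoing package dependencies
                        p.abstractness(),       // abstract classes / total classes
                        p.distance(),           // distance from the main sequence
                        p.containsCycle());     // package participates in a dependency cycle
            }
        }
    }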

So, after evaluating the available tools, we decided that the static-view of the system would be obtained with Jude and Omondo. In order to obtain the package dependencies diagram it is only necessary to open it via the contextual menu with the source folder selected. The resulting diagram needs to be manually filtered and rearranged in order to improve its readability. As general advice, packages with many relations to the rest of the code should be placed in the center of the diagram, and all the dependencies between each pair of packages should be aggregated. The diagram will also be simplified as we detect that some packages are not relevant to the architecture.

Table 6 - Metrics

Tool used Product obtained #

DIAGRAMS

Jude Class diagrams 17 (package level)

Omondo Free Package dependencies diagram 1

METRICS

JDepend Dependency cycles 20 cycles detected

JDepend Efferent – Afferent coupling 17 (package)

JDepend Abstractness 17 (package)

JDepend Distance 17 (package)

Eclipse Metrics Cyclomatic complexity method level

Eclipse Metrics Number of indirection levels method level

Eclipse Metrics Lack of cohesion in methods method level

Eclipse Metrics Number of fields / parameters method level

Eclipse Metrics Lines of code in method method level

In addition, we will generate a class diagram for each package, starting with the 'Generate detailed class diagram' option from Jude. These diagrams must also be organized, and in some cases completed with the missing dependencies that Jude does not detect (although this happens rarely). These diagrams display all the classes of a package and the directly dependent classes from the rest of the code, so a color code has been applied to each diagram to further improve readability.

As said before, we will first obtain the dynamic-view data from the tools before processing these diagrams in order to obtain the preliminary architecture of the system. In the next figure we can see the package dependencies diagram generated with Omondo, after some sorting and filtering of non-relevant packages (the reasoning behind these discards is explained in the next section).


Figure 25. Package dependencies diagram

13.2.2.4 Dynamic-view extraction

In this process we analyze the system at run time, obtaining the collaborations between elements needed to perform the desired tasks. We will also obtain traces of the system that will allow us to measure several quality attributes such as performance.

13.2.2.4.1 Use scenarios definition

Before we start this process, we have to define some sequences of interactions between the user and the system, based on the use case information obtained from the user manual of the application. These test scenarios should cover the most representative execution interactions of the application, in order to collect as much information as possible. They should also not overlap, and should be well defined, in order to ease the subsequent analysis of the obtained information.

With those priorities in mind, we define the following scenarios:

Application initialization: The user opens the application, specifying a work folder, and chooses and displays one of the icons in the Visor window.

Image transformations: With the application running, the user switches the resolution and displays one of the images. Then the following operations are applied to the image: Window/Level, choosing different parameters in the dialog; Ecualizado, obtaining a new image; and Rotacion, modifying the second image without generating a third one. Finally, the second image is saved in the work folder.

These two scenarios cover most of the functionality described in the user manual, and the resulting sequence diagrams from both scenarios will show the dynamic view of the system.

13.2.2.4.2 Tools analysis

Only one tool was used to obtain a dynamic view of the system: Eclipse's TPTP 4.1 (Test and Performance Tools Platform). This set of Eclipse plug-ins provides a complete framework for developing and running agents for testing and monitoring applications. For the architecture recovery process, we are only interested in a small subset of its functionality, concretely the supplied Java profiler. The TPTP agents need an Agent Controller in order to work: one can either employ the Integrated Agent Controller (IAC), integrated as part of the tools, or install a Remote Agent Controller (RAC), which works as a separate process. Before obtaining the data we tested the tools on both Windows XP Professional and Linux Kubuntu systems, employing both the IAC and the RAC, and we noticed that performance shifted hugely depending on the configuration. The IAC was too slow for profiling this application, with waiting times so long that profiling a complete session was impossible. The developers recognize this issue, and it should be vastly improved in the upcoming 4.2 version of the tools. When employing the RAC, Windows performance was much better than its Linux counterpart, by roughly a factor of five, so it was chosen as the profiling environment for this process. It is important to note at this point that we are concerned about the performance of this application, as it has big memory and response time issues; if we had to profile a less demanding application we would probably be able to choose any of the other options.

The Java profiling agent can collect three kinds of run time information: method invocations, memory usage and execution time. The testing and profiling view can show this information in statistics tables or in execution flow and sequence diagrams. In this process we are mainly interested in the diagrams, as a visualization of the dynamic interactions of the system, and we will also collect run time statistics in order to detect bottlenecks and performance issues.

The profiling process with the TPTP tools is relatively simple, although it can be confusing because of the amount of options available. In this test case we chose to collect information of the three different types (methods, memory and execution time), filtering away the JRE's base classes (using the default profiling filter). In the first scenario we collected the information from the start, whereas in the second scenario we started the collection manually once the application was fully initialized. With the application running we followed the scenario operations and stopped the application once the desired data was collected. TPTP can display the sequence diagram of the recorded scenario in three different modes. It can show the thread interaction diagram (where the only lifelines are the threads), the actual object diagram (each instantiated object has its own lifeline) and the class diagram (each class has its own lifeline, shared by all its instances). In this case study we opted for the class diagrams, because the thread view is not informative in this context, with a main thread, an AWT event thread and some sporadic image loading threads. We could also have chosen the object view, which unlike the class view reflects the real execution sequence, but it was bigger, as multiple instances of a class greatly increased the size of the diagram. The main downside of the object representation is that calls to inherited methods appear in the parent's lifeline, even though there may be no instances of the parent in memory.

The resulting class interaction diagrams from the test scenarios are too fine grained, resulting in a huge size (100,000 pages in the first scenario), so we need a filtering process in order to obtain a manageable diagram. As a reference, the diagram shown in Figure 26 illustrates the interaction between the classes corresponding to applying the Realce operation to a loaded image, after a lot of filtering work. The process was fairly manual, but we applied several techniques that could be reused in other processes with the same tool. This filtering is part of the initial processing of the information collected from the tools, which belongs to the abstraction process, so here we focus on the TPTP functionality for simplifying the diagram. We can filter the displayed information in the profiling results (this is an additional filter on top of the collection filter applied before initiating the process). With this filter we have removed the information concerning inner classes and non-source-code classes. In addition to the general filter rules, we can also selectively hide lifelines or method calls from the diagram. For example, we have filtered out the <clinit> class-initializer methods that appear at the creation of many objects in order to simplify the diagram. We have also filtered out many classes that are not important for the architecture of the system, such as utility classes, but this refinement already requires an understanding of the general architecture of the system.

The sequence diagram can also be reduced by collapsing method calls and lifelines. A collapsed method call hides every nested call, which is useful to compress repeated sequences (collapsing the repetitions) or to show only the public method executions, hiding the actual implementation. Collapsing lifelines allows us to combine the method calls of two or more classes into one lifeline. This operation lets us unify parent and child method calls in the same lifeline, preventing an object from being represented in the diagram as two different lifelines. We have also collapsed several lifelines which have an equivalent role in the system, such as the different Operaciones (Suavizado, Realce...); each one can still be distinguished by looking at the names of the method calls.

After applying these techniques to the diagram obtained with the first scenario, we obtained a 32-page diagram where the main interaction sequence of the scenario is much clearer.


Figure 26. Sequence diagram - Applying the Realce transformation to an image

As mentioned before, we have also obtained several execution footprints that will allow us to evaluate the performance of the application. We have defined an additional scenario, labeled 'stress test', where we try to cover as many execution interactions as possible resulting from the operations defined in the user manual. The profile data for this scenario gives an indication of the packages and classes actually used in the system; comparing this footprint with the full class list points us to possibly unused classes that should be checked and removed if necessary.

Table 7 - TPTP Metrics

Tool used Product obtained #

DIAGRAMS

TPTP Sequence diagrams (Class) 2 scenarios

METRICS

TPTP Method invocation data (number of calls, percentage of methods used) 3 scenarios

TPTP Memory usage data (number of instances of each class, amount of memory used) 3 scenarios

TPTP Execution time data (total time spent in each method, average execution time of each method) 3 scenarios

13.2.2.5 Abstraction

In this manual process we refine the information extracted automatically with tools in the previous processes, in order to obtain a higher level view of the system. We have employed four different techniques in this process, which are described in the following sections:

Filter non-relevant elements

Filter unused elements

Detect fundamental classes

Define higher level modules

This analysis focuses on each package independently, considering only the package classes and the dependent classes from the rest of the system. In this process we also integrate the information from the sequence diagrams, which was really useful for understanding the class interactions, allowing us to navigate along a typical execution flow. The Javadoc's class definitions have also been used to ease the understanding of the classes although, as we saw earlier, they must be checked because they could be incorrect.

13.2.2.5.1 Filter non-relevant elements

First of all we exclude several elements from the analysis, as they aren't a fundamental part of the architecture.

The test packages (and test-enabling classes such as TestProxy) are discarded, as they aren't part of the architecture. The test packages had already been left out of the analysis, but there were still some TestProxy classes in the analyzed packages.

We also filter out all the utility packages. These packages are easily identified by their zero efferent coupling and multiple afferent couplings, labeling them as independent elements. The typical utility class is composed of several static methods that provide low level functionality but is bound closely to implementation details, making it irrelevant for the global system architecture. With these criteria we have filtered out four utility packages: three packages ending in util or utils, and telemedicina.recursos, whose only class, TelemedicinaI18N, clearly falls within the definition stated previously.
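This coupling heuristic is easy to automate on top of the JDepend output shown earlier. A minimal sketch (the threshold values simply encode the rule stated above; the class is illustrative):

    import jdepend.framework.JavaPackage;

    public final class UtilityPackageFilter {
        // A utility-package candidate depends on nothing (efferent coupling == 0)
        // while several other packages depend on it (afferent coupling > 1).
        public static boolean isUtilityCandidate(JavaPackage p) {
            return p.efferentCoupling() == 0 && p.afferentCoupling() > 1;
        }
    }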


Finally, we filter out all the exception elements (a package and several classes), which have a clearly independent function in the system and would only clutter the diagrams.

After this sub-process we have filtered out 7 packages and 36 classes from the analysis.

13.2.2.5.2 Filter unused elements

After this step we have 11 packages to analyze, but before delving into them we are going to check whether there are unused packages or classes, as we may suspect because of the Javadoc's inconsistency. The procedure for finding these elements is composed of three steps. First we check the data from the stress test against the full class list. Next we look for dependencies from the rest of the code in the class diagrams, and finally we check the source code locations where each dependency is found, in order to verify whether the class can actually be instantiated. The validation has to be done on the source code because we cannot be sure the stress test is complete, and the class diagrams are incomplete and may contain dependencies from unreachable pieces of code.
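The first step of this procedure can be partially automated with a simple set difference. A minimal sketch, assuming the full class list and the profiled class list have been exported to plain text files with one fully qualified class name per line (the file names are illustrative):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    public class UnusedClassCandidates {
        public static void main(String[] args) throws Exception {
            // Every class in the code base (e.g. exported from the IDE).
            Set<String> all = new HashSet<>(Files.readAllLines(Paths.get("all-classes.txt")));
            // Classes observed by the profiler during the stress test.
            Set<String> profiled = new HashSet<>(Files.readAllLines(Paths.get("stress-test-classes.txt")));
            all.removeAll(profiled); // what remains was never loaded during the stress test
            // These are only candidates: the scenario may simply not have exercised them,
            // so each one still has to be confirmed against the diagrams and the source code.
            all.forEach(System.out::println);
        }
    }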

In the stress test analysis we found two packages absent from the statistics: the jai.roi and visor.roi packages (ROI, Region Of Interest). The classes from these packages clearly come from Lawrence Rodrigues' book, and they are in charge of drawing a ROI on a loaded image in order to measure some specific values. This functionality is not mentioned in the documentation, nor has it been found when interacting with the application. When we check the class diagrams there is only one detected dependency between these packages and the rest of the code, and when we look at the source code we find that it is an unreachable line, so these packages are not used at all in the current version of the tool and consequently can be removed from the analysis. Several individual classes are also absent and can never be reached, but we also detect some elements which could appear in another execution, especially in executions with errors.

We cannot be certain about the reason for these dead classes in the source code: they could result from a requirements change during development, or the developers may have imported full packages from external sources and not removed the unused elements.

After this sub-process we have filtered out 2 additional packages and 20 classes from the analysis.

13.2.2.5.3 Detect fundamental classes

After the filtering process, nine packages remain to be analyzed. In this sub-process we try to find the key classes that form the core of the system architecture. This process is totally manual and somewhat subjective, but we apply the following criteria in order to identify these classes:

Elements that map directly to domain model elements

Elements acting as interfaces with the rest of the system

Control elements

GUI main elements

After this process we get an overall understanding of the system architecture, without the need to analyze the source code. The sequence diagrams have been really informative in obtaining this knowledge, although the analysis was focused on each package. We now present the results of this analysis, expressed as a brief description of each package and its elements.

telemedicina.tratamientoimagenes.inicializador

This package contains the functionality to start the application.

The Inicializador class, given a Carpeta, creates a Sesion and creates the main program window (WTratamientoImagenes). LanzadorSesion is a helper class to obtain the Sesion.

telemedicina.negocio

This package contains the Carpeta/Usuario/Sesion/Actividad model, along with several managers (GestorSeguridad, GestorAlmacenamiento, GestorActividad). The actual implementation of these classes is very incomplete; probably the real implementation was moved to another component of the medical system. In fact, if we analyze the system as a standalone imaging application, this type of functionality does not make sense, which probably caused this part to be abandoned.

The only used concept from this package is the Carpeta class, just as a representation of the starting working folder (which is passed as a parameter).

Figure 27. Main GUI classes


telemedicina.tratamientoimagenes

This package contains the main Swing graphical elements of the application. WTratamientoImagenes is the main window; BarraOperaciones, BarraMenu and BarraOpcionesBasicas form the menu and tool bars; WacercaDe is the about window. The main classes are illustrated in Figure 27.

The window contains two main containers, the icons bar and the Visor. Each of them is managed by a controller (ControladorIconos, ControladorContenedores), defined in its own package.

The package also contains a manager for executing the Operaciones, GestorAcciones.

These are the basic elements of the package, but it also contains part of the session model defined in the negocio package, with a local session class (SesionLocalTI) which, again, is mostly empty.

telemedicina.tratamientoimagenes.controladoriconos

This package stores the Swing elements that form the container of the work folder icons (BarraIcons).

ControladorIconos is the main class for managing the system icons. Other parts of the system don't interact directly with it; instead they interact through two intermediate classes: IControladorIconosContenedor and IControladorIconosSistema.

IControladorIconosSistema processes the user input in the Icon bar, invoking the corresponding actions.

IControladorIconosContenedor should be the interface with ControladorContenedores, but its concrete children interact directly with ControladorIconos.

ScrollIconos and PanelIconos are the GUI containers of the Iconos (snapshots of the images with a border).

The package also contains legacy classes from the Rodrigues book that are never used (ImageLoader, ImageBuffer).

telemedicina.tratamientoimagenes.controladorcontenedores

This package contains the controller of the component where the images are displayed and a memory manager.

ControladorContenedores is the abstract controller of the component where the images are displayed (Visor). Its concrete implementation is called ControladorVisoresImagenes.

Loading images in memory is managed by the GestorMemoria from the same package.

telemedicina.tratamientoimagenes.visor

This package contains the Swing components that create the image visualization component and the image operations classes.

The top level component is the Visor, which can hold a number of TelemedicinaCanvas objects up to the Resolucion parameter. ControladorVisoresImagenes maintains a list of Visor objects (one for each Resolucion), and ControladorResolucion manages this list and the selected Canvas.


Each canvas object contains a TelemedicinaImage, and displays the latest transformations (Operaciones) applied to the image in a text field.

Every implemented Operacion follows an MVC-like pattern, extracted from Rodrigues' book. The design separates the user interface (the *GUI classes) from the application logic (supplied by the classes implementing *Controller). The operation's logic should be applied through an InterfazProcesamiento (an interface implemented by TelemedicinaCanvas), although several operations pass the concrete class directly.
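A minimal sketch of this pattern; only the InterfazProcesamiento name and the *GUI/*Controller naming convention come from the recovered system, while the method names and the demo class are illustrative:

    // Processing abstraction implemented by TelemedicinaCanvas: operations should
    // depend on this interface, not on the concrete canvas class.
    interface InterfazProcesamiento {
        void aplicarOperacion(String nombre);
    }

    // Application logic of one operation (the "*Controller" role).
    class RealceController {
        private final InterfazProcesamiento destino;

        RealceController(InterfazProcesamiento destino) {
            this.destino = destino;
        }

        void ejecutar() {
            destino.aplicarOperacion("Realce"); // delegate the actual processing
        }
    }

    // User interface of the operation (the "*GUI" role): collects user input
    // and forwards the request to its controller.
    class RealceGUI {
        private final RealceController controller;

        RealceGUI(RealceController controller) {
            this.controller = controller;
        }

        void onApplyButtonPressed() {
            controller.ejecutar();
        }
    }

    public class OperacionPatternDemo {
        public static void main(String[] args) {
            InterfazProcesamiento canvas = nombre ->
                    System.out.println("Applying " + nombre + " to the selected image");
            new RealceGUI(new RealceController(canvas)).onApplyButtonPressed();
        }
    }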

The Window/Level operation is more complicated as it has a Graphical User Interface (WindowLevelPane) where the user can choose the values, and a WindowlevelProfile to make these values persistent.

This package has several unused classes: one directly from the book (HistogramFrame) and two belonging to the UI part of operations which don't require it (BordesGUI, InversaGUI).

telemedicina.tratamientoimagenes.jai.imageio

The classes in this package load and save images from/to the hard disk, performing the format-specific operations:

Loading images: The GestorMemoria calls CargadorImagenesJAI (see Rodrigues' JAIImageLoader), which returns a TelemedicinaImage. Loading is done in a separate thread, with an ImageLoadedEvent fired when the operation is finished.

Saving images: Choosing the file operation opens a JFileImageSaver dialog, where the user can choose a filename filtered by a FileExtensionFilter. The physical saving operation is executed by calling a static method of ImageSaverJAI, which in turn calls JAI encoder methods.
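For illustration, such a static save helper built on JAI's standard "filestore" operation typically looks like the following sketch; this is a generic example, not the system's actual ImageSaverJAI code, and the format string can be "TIFF", "BMP", "JPEG", etc.:

    import java.awt.image.RenderedImage;
    import javax.media.jai.JAI;

    public final class ImageSaver {
        // Encodes the image to disk in the given format using JAI's
        // built-in "filestore" rendered operation.
        public static void save(RenderedImage image, String path, String format) {
            JAI.create("filestore", image, path, format);
        }
    }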


Figure 28. Image loading

telemedicina.tratamientoimagenes.codec

This package contains the classes needed to physically load DICOM images. It is adapted from Sun's code in the jai-demos project, using a library (dicom.jar) for the actual file operations. Images are modelled as TelemedicinaImage, which are obtained from a DicomImage2 via the DicomDecoder.

The class DicomImage is a legacy from the original jai-demos code.

telemedicina.tratamientoimagenes.jai.render

This package is imported from the book and lacks a clear function. Only two of the five classes are used: the parent of TelemedicinaCanvas and a UI for scrolling an image (ScrollGUI).

13.2.2.5.4 Preliminary architecture

The following diagram displays the complete architecture of the system after these processes, simplified from 211 elements to 26 in order to provide a higher level view of the system. The diagram shows only one of the Operaciones (Realce), which is enough to illustrate the GUI - Controller - Processing interface pattern followed for this functionality of the system. The diagram displays the most relevant classes for the architecture, and the displayed classes can be directly mapped to source code elements, greatly simplifying the process of zooming in to achieve a better understanding of how a specific function of the system is executed.

Figure 29. Preliminary architecture

13.2.2.5.5 Define higher level modules

The abstraction process has two objectives: reducing the complexity of the preliminary architecture by increasing the abstraction level, and possibly filtering the architecture down to the topic of interest. In this case study we are interested in planning a future evolution of the system, in order to improve its usability, performance and reusability. To ease this future process we are going to define several higher level modules, according to functional and reuse criteria.

Before defining these modules we should look at the existing package division. Several packages suffer from a lack of cohesion, probably caused by the import of external sources with some modifications.

With the mentioned criteria we have defined four main functional modules in the application: graphical user interface, image storage, image loading & saving, and image model & transformations. For each module we now define its assigned functions, the related packages and the APIs used.


13.2.2.5.5.1 Graphical User Interface

This component defines the graphical elements of the user interface (windows, dialogs, buttons, action bars) and captures and processes the user input. When necessary, it sends processing orders to the image model & transformations component. The Visor and Iconos elements can be treated as submodules because of their complexity.

Related packages: render, controladoriconos, controladorcontenedores, tratamientoimagenes (Main Window), visor

Related APIs: Swing

13.2.2.5.5.2 Image storage

This module defines how the images are stored and the user permissions needed to access them. In this case study there is clearly a designed architecture for this task, but the implementation is almost blank; the only implemented part is the work folder element. The functionality appears to have been moved to another part of the business framework.

This module does not make sense if we analyze the application independently, but it would become necessary in a complete medical system context. In fact, the application has an alternative to the work folder: a standard image loading dialog identical to that of any image viewing application.

Related packages: negocio, tratamientoimagenes (SesionTI...)

13.2.2.5.5.3 Image loading & saving

This module converts the files from the storage system into objects which can be processed by the remaining parts of the application. It performs the low level format conversion operations.

Related packages: imageio, codec

Related APIs: JAI, DICOM.jar library

13.2.2.5.5.4 Image model & transformations

This component defines the objects corresponding to an image in memory and the interface for applying transformations to it. The image objects are displayed in the Canvas elements of the GUI, and could also have been assigned to that module.

Related packages: tratamientoimagenes (GestorAcciones), visor (Operaciones)

Related APIs: JAI, bridge to Swing


Figure 30. Modularization of refined architecture

13.2.3 Success Indicators

An overview of the results of the recovery process is presented in Table 8. We obtained a complete overview of the system, with both static and dynamic views. The diagrams were refined during the abstraction, discarding almost 90% of the elements. This produced a clear view of the system architecture, containing only the key architectural elements.

Table 8 - Recovery results

System statistics
Lines of code 9973
Number of classes 211
Number of packages 17

Extraction phase
Class-level diagrams 18
Package-level diagrams 1
Sequence diagrams of scenarios 3

Abstraction phase
Filter non-relevant (removed classes) 36
Filter non-used (removed classes) 20
Hierarchization (removed classes) 119
Result of the abstraction (in %) 88%


13.3 Conclusion

13.3.1 Summary

This chapter has presented a process for recovering the architecture of existing software systems, in order to enable their evolution. It is designed to be applied to small and medium sized software systems. The process has been applied to the architecture recovery of a medical imaging system used in Spanish hospitals. The case study description provides valuable contributions for practitioners, who will find useful guidelines and recommendations.

Applying the QAR process to the system has allowed us to quickly recover the system architecture along with several metrics. This information allows us to evaluate the system architecture and to guide a future evolution of the system focusing on several quality aspects.

Regarding the tools, the performance of Jude and TPTP on Windows was acceptable, although the subject of the analysis was a relatively small system, so these measurements are probably not significant. On the other hand, it would be desirable to improve the dependency analysis of the reverse engineering tools so that it catches every relationship while ignoring those originating from unreachable parts of the code, although this is probably very complicated to achieve. Another area for improvement is the default layout of the diagrams, which forces manual rearranging and wastes time. The filtering tasks could also be automated, as could some guidance in the hierarchization of elements. These processes will still need human validation, but there is considerable raw work that could be taken over by tools.

On the positive side, the process quickly detected the inconsistencies in the Javadoc, as well as the multiple origins of the code, which proved to be valuable information for the remaining part of the process.

13.3.2 Lessons Learned

One of the goals of the case study was analyzing the feasibility of using general purpose software tools to support the recovery process. We have obtained a positive result by using tools aligned with the programming language the system is built upon. Tools such as Eclipse TPTP, Omondo UML and Jude have proven to be sufficient for reconstruction processes. Furthermore, we believe the seamless integration of these tools with a regular development environment and their capability to provide immediate visualization are highly valuable features. However, the tools have presented several limitations that should be addressed to improve the productivity of architecture recovery processes. The dynamic analysis tools do not export the gathered data to a common data exchange format, which blocks automatic synchronization. In addition, both tools lack the possibility of programmatically manipulating the gathered data, which could be needed when working with large systems. It would be interesting in the future to communicate with the developers of these tools (especially of TPTP, which is an open source project) and provide this information about the additional requirements for improving the suitability of their tools for recovery processes. The wide adoption and usability of the chosen tools can be a big factor in popularizing recovery/reconstruction methods in industry.

13.3.3 Final Recommendations

On a final note, it has been stated from the beginning that the process is suited to small and medium systems. However, the actual scalability of this approach has not been thoroughly tested, and should be checked with more case studies on different systems, possibly adopting more advanced semi-automated abstraction techniques for larger systems.


14 Evolution of a Legacy System Towards SOA

14.1 Problem Statement

14.1.1 Domain

This case study will be executed on a medical imaging product, based on Java, supplied by Ibermática. It is currently in use in several Spanish hospitals. The system allows doctors to visualize high resolution medical images and manipulate them applying several transformations.

The objective of the case study is to evolve the legacy system in order to improve its quality. The evolution will address the main concerns of both users (gathered through surveys) and developers. The main areas of improvement are: user experience (usability and performance), system maintainability and interoperability with other medical systems. These requirements, plus the documentation obtained from the previous case study, will be converted into an evolution plan which will guide the process. Currently the case study is composed of the following stages, although the list may change during the case study:

Platform migration towards SOA (Service Oriented Architecture): The legacy system will be refactored at the architecture level to a SOA model. For this concrete case we have chosen the OSGi (Open Service Gateway Initiative) Service Platform as our component model of choice, with the Equinox implementation as the base technology. The system will be refactored into a set of dynamic, loosely coupled services (OSGi services and bundles).

Replacement of the User Interface (UI): In order to improve the quality of the user experience, the UI of the product will be replaced by a substitute, which should prove more extensible, customizable and attractive. The chosen model for the new GUI (Graphical UI) is the RCP (Rich Client Platform) model.

Add connectivity functionality with remote imaging servers via WADO (Web Access to DICOM Objects)

14.1.2 Current Situation

This case study is the continuation of "Architecture recovery of a legacy imaging system". The imaging system design has been evolved to adopt the principles of SOA (Service-Oriented Architecture).

14.1.3 Goals and Expected benefits

Loose coupling is one of the primary benefits of service-oriented systems. SOA allows breaking down the system into independent modules (i.e. the services) that interact with each other by means of well-defined interfaces. Although the evolution process we present here is flexible enough to be applied with other objectives in mind, we believe that it is especially well suited for evolving a system to SOA. Furthermore, it is our recommendation to follow SOA design principles to improve software maintainability and to smooth the path for future evolution cycles.


14.2 Solution

14.2.1 Approach

In this case study we propose five steps for the migration to the SOA architecture, as follows:

Architecture selection.

Definition of the steps.

Planning of the steps.

Feasibility check of the steps.

Evolution Execution.

The five phases of this process produced a detailed evolution plan for the system which accomplishes the goals of the evolution process. The plan was also adapted to the singularities of the system, reflected in the recovered architecture.

14.2.1.1 Architecture selection

We were looking for a new architecture which supports an increased degree of maintainability and interoperability for the system. These are general good design principles, which are explicitly supported by Service-Oriented Architectures [30]. In the Java context, the OSGi [31] specification provides a functional framework based on these principles.

The OSGi service platform is a specification developed by the OSGi Alliance, which defines a framework for service execution, plus some basic services and facilities for service lifecycle management, including a registry of services and locally available service implementations.

Figure 31. OSGi architecture


These characteristics have led us to select OSGi as the enabling technology for our future architecture, which will be composed of a set of loosely coupled dynamic components integrating seamlessly via services. Thus, moving the system architecture to the OSGi service platform is a prerequisite for evolving the system into a full-fledged SOA.

The selection of this service execution framework and the evolution of the medical imaging system towards its usage would provide the following advantages:

Improved maintainability of the system and the architecture: as the services are loosely coupled, their interrelations are governed by the service registry. Relations can be changed at runtime, and services can be changed to interact remotely in a transparent manner.

Improved configurability of the system, as services can be combined in different ways and lead to different runtime configurations.

Improved substitutability of parts of the system, as the interactions between services allow for the replacement of service implementations. In some cases, complete subsystems can be replaced by new libraries or third party provided services.

Improved service management capabilities, as the definition of isolated services on top of the OSGi service platform allows for the (remote) management of each of them. This opens a wide range of possibilities, where the company providing the software could face a change of business model and provide (parts of) the service instead.

14.2.1.2 Definition of the steps

With our objectives in mind we defined the required evolution cycles to achieve the desired targets. This phase produced these four cycles, based on the goals and the future architecture:

Migrate the product to the selected architecture, the OSGi Service Platform. In this case study we have chosen Eclipse Equinox (OSGi R4 implementation).

Refactor the product as a set of components interacting through services.

Develop a substitute user interface for the system, based on RCP (Rich Client Platform). RCP is a framework built on Eclipse which allows rapid development of client applications, allowing for high customizability and integration with different tools [32]; the successful Eclipse IDE itself has been built using this model. It should also improve responsiveness because of the use of SWT, a graphic widget toolkit that replaces the standard Java Swing; Swing is slower because it emulates widgets on the virtual machine instead of invoking system calls (a minimal SWT sketch follows this list).

Open up the system to interoperability with PACS (Picture Archiving and Communication Systems). The current version of the product works with locally-stored images, while the new SOA-migrated system would be able to interact with remote image servers and could also be included in a medical workflow by publishing its service interface using Web Services.
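A minimal generic SWT sketch illustrating the native event loop mentioned above (not code from the product):

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Label;
    import org.eclipse.swt.widgets.Shell;

    public class SwtHello {
        public static void main(String[] args) {
            Display display = new Display();    // connects to the native window system
            Shell shell = new Shell(display);
            shell.setText("SWT example");
            new Label(shell, SWT.NONE).setText("Native widget, no emulation");
            shell.pack();
            shell.open();
            // Standard SWT event loop: dispatch native OS events until the window closes.
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) {
                    display.sleep();
                }
            }
            display.dispose();
        }
    }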

14.2.1.3 Planning of the steps

In this step we planned a workflow for the execution of the cycles. The first step has to be the migration of the architecture to the OSGi service model, which is mandatory before any of the other steps can be executed. The second step must be the refactoring of the legacy components into OSGi components and services, which greatly simplifies the other two cycles. The final two cycles can be executed in any order, or even in parallel, thanks to the refactoring done in the second step.

14.2.1.4 Feasibility check of the steps

Some quick tests of the interoperability of the technologies involved were made in order to perform a preliminary validation of the defined process. To do that, we performed unit testing on the OSGi components (bundles) using JUnit. These tests did not expose any problems, and the experience obtained with this effort helped in the evolution execution stage.

14.2.1.5 Evolution Execution

This step consists of refactoring the legacy product into a set of loosely coupled services, deployed as bundles. The documentation obtained from the architecture recovery process is really helpful here for separating the functionality of the legacy code, easing the refactoring. The modules identified in the recovery are: the graphical user interface (the largest part of the application), the image transformation component, tied to the JAI library, and the image access/storage functionality. For each functional module we defined a generic service, with a specific implementation in the existing code base. This eases the task of extending the application by substituting one service implementation for another (e.g. local folder access for remote server access).
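A minimal sketch of this service-per-module idea; ImageStorageService, LocalFolderStorage and the activator are hypothetical names chosen for illustration, while the framework calls are standard OSGi API:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Generic service contract for the image access/storage module.
    interface ImageStorageService {
        byte[] load(String imageId);
    }

    // Implementation backed by the legacy local work-folder code.
    class LocalFolderStorage implements ImageStorageService {
        public byte[] load(String imageId) {
            // ... delegate to the existing folder-access code ...
            return new byte[0];
        }
    }

    // Bundle activator that publishes the implementation in the Service Registry;
    // a remote-server implementation could be registered in exactly the same way.
    public class StorageBundleActivator implements BundleActivator {
        public void start(BundleContext context) {
            context.registerService(ImageStorageService.class.getName(),
                    new LocalFolderStorage(), null);
        }

        public void stop(BundleContext context) {
            // services registered by this bundle are unregistered automatically
        }
    }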

For decoupling the system components we have chosen the whiteboard pattern [33]. This pattern is an example of inversion of control (IoC), that is: "do not call us, we will call you". The service providers register listeners in the OSGi Service Registry. When a consumer needs a service, it looks it up in the Service Registry and binds to it (see Figure 32). This way the presentation is completely decoupled from the underlying logic.
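On the consumer side, the look-up-and-bind interaction described above is typically implemented with a ServiceTracker, which also copes with services appearing and disappearing at runtime. A minimal sketch, reusing the hypothetical ImageStorageService from the previous example:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    public class ViewerBundleActivator implements BundleActivator {
        private ServiceTracker tracker;

        public void start(BundleContext context) {
            // Track whichever ImageStorageService implementation is currently registered.
            tracker = new ServiceTracker(context, ImageStorageService.class.getName(), null);
            tracker.open();

            ImageStorageService storage = (ImageStorageService) tracker.getService();
            if (storage != null) {
                byte[] data = storage.load("example-image"); // illustrative call
            }
        }

        public void stop(BundleContext context) {
            tracker.close(); // release the tracked service
        }
    }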

The last step performed was the duplication of the GUI, offering two possibilities for the underlying platform: Swing and RCP. The functionality of the system has not changed, but the usage characteristics do, as each of the libraries provides a different look and feel and different performance figures. By using the whiteboard pattern at runtime (which the OSGi platform allows), the selected GUI service implementation can be changed during execution, so the goal of adaptation to the user has also been met.

Figure 32. Whiteboard Actors in the OSGi Framework

14.2.2 Major Results

Finally, we obtain the new version of the imaging system. This new version is OSGi-based. In this system the business logic is completely decoupled from the GUI. Because of that, we could implement a new GUI based on RCP and connect it to the application logic.


Figure 33. OSGi-based system architecture

14.2.3 Success Indicators

When we finished the refactoring to SOA, we had two different versions of the same system: one pure Java and the other OSGi-based (SOA). We are going to compare the expected maintainability of both systems by applying our prediction model.

We use internal metrics to predict the maintainability, according to the following process:

Selecting metrics. It is important to select a set of metrics for each one of the internal quality indicators (complexity, coupling, cohesion, and so on).

Selecting tools. For each metric, we should have the appropriate tool to measure it.

Calculating thresholds. This is a critical and difficult point, as the world of metrics is very heterogeneous. In this step we try to estimate the optimum range of operation for each selected metric.

Measurement. Taking the selected metrics with the tools.

Visualization. We have to choose a visualization technique to help us better understand and evaluate the results.

Validation. With the collected information, in this step it is time to evaluate and interpret the results.

Figure 34. Method to evaluate maintainability

The choice and interpretation of metrics is full of problems [103]. There are many metrics in the literature. Since the perfect software metric does not exist, we have selected a set of metrics, those with the most impact:

Software Complexity Metrics by McCabe, 1976 [104].

Software Complexity Metrics by Halstead, 1977 [105].


Object-Oriented Metrics by Chidamber and Kemerer, 1994 [106].

Object-Oriented Design Quality Metrics by Martin, 1994 [107].

Object-Oriented Design by Abreu, 1994 [108].

Cohesion and Reuse in an Object-Oriented System, by James M. Bieman and Byung-Kyoo Kang, 1995 [109].

Object-Oriented Metrics by Henderson-Sellers, 1996 [110].

A Unified Framework for Cohesion Measurement in Object-Oriented Systems, by Briand, Daly, and Wüst [111].

Our set of metrics is the following:

Complexity: VG (Cyclomatic Complexity), HD (Program Difficulty), RFC (Response for a Class).

Coupling: EC (Efferent Coupling), AC (Afferent Coupling), CBO (Coupling between Object Classes).

Cohesion: LCOM (Lack of Cohesion in Methods), LCOM5 (Lack of Cohesion in Method 5), COH (variation of LCOM5).

Change Impact: WMV (Weighted Methods per Class).

Test Coverage: COV (Code Accessed by Tests).

Code Quality: CR (Comments Ratio), CON (Naming Conventions).

Encapsulation: DMS (Distance from the Main Sequence).

Reuse: DIT (Depth of Inheritance Tree), NOC (Number of Children).

In the next tables we summarize the metric results (mean values). The first table represents the original application; the second contains the metrics collected in the new, SOA-based application.

Table 9 - Metrics in pure-Java application

Metric Value
VG 10
HD 12
RFC 22.28
EC 3.944
AC 10.389
CBO 2.3
LCOM 0.419
LCOM5 59.2
COH 49.01
WMV 16.486
COV 0
CR 20.4
CON 8961
DMS 0.416
DIT 2.76
NOC 0.212

Table 10 - Metrics in SOA-Java application

Metric Value
VG 1.563
HD 8.97
RFC 19.48
EC 2.917
AC 4.875
CBO 1.5
LCOM 0.345
LCOM5 65.98
COH 42.31
WMV 13.75
COV 0
CR 15.5
CON 0
DMS 0.573
DIT 2.029
NOC 0.212

We use radial diagrams for the visualization. We first need to normalize the results above, and this way we obtain the radial diagrams:
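The deliverable does not detail the normalization applied; a minimal sketch, assuming simple min-max scaling in which metrics where lower raw values are better (e.g. cyclomatic complexity) are inverted, so that a larger radar area always means better maintainability:

    public final class MetricNormalizer {
        // Scales value into [0, 1] relative to [min, max]; if lowerIsBetter,
        // the scale is inverted so that 1 always represents the better end.
        public static double normalize(double value, double min, double max,
                                       boolean lowerIsBetter) {
            double clamped = Math.max(min, Math.min(max, value));
            double scaled = (clamped - min) / (max - min);
            return lowerIsBetter ? 1.0 - scaled : scaled;
        }

        public static void main(String[] args) {
            // Example: cyclomatic complexity (lower is better), assumed range 1..15.
            System.out.println(normalize(10.0, 1.0, 15.0, true));   // pure-Java VG
            System.out.println(normalize(1.563, 1.0, 15.0, true));  // SOA-Java VG
        }
    }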


Figure 35. Maintainability in pure-Java application

Figure 36. Maintainability in SOA-Java application

Comparing both diagrams we can draw some conclusions. The first is that the SOA-Java application is better, in terms of maintainability, than the original one: the area that represents maintainability is bigger for the SOA-Java application.

The new system has improved in almost all quality features: complexity, coupling, change impact, test coverage, code quality, encapsulation and reuse. There are not any changes in test coverage since there is zero test coverage in both applications (pure-Java nor SOA-Java). The cohesion continues being poor in new system. This point should be review in a feature version of the application.

To validate these results, we use the Maintainability Index (MI), which has the following values:

MI(pure-Java) = 82,728

MI(SOA-Java) = 89,385

According to the thresholds, the original application is in the range of fair maintainability (65 <= MI < 85). The SOA-Java application has improved its maintainability and is now in the range of excellent maintainability (MI >= 85).
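For reference, the Maintainability Index is commonly computed with the Oman and Hagemeister formulation; we assume the values above come from this or a closely related variant:

MI = 171 - 5.2 * ln(aveV) - 0.23 * aveVG - 16.2 * ln(aveLOC)

where aveV is the average Halstead Volume per module, aveVG the average cyclomatic complexity per module, and aveLOC the average number of lines of code per module. A common four-metric variant adds 50 * sin(sqrt(2.4 * perCM)), where perCM is the percentage of comment lines, which would also reward the comment ratio (CR) measured above.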

14.3 Conclusion

14.3.1 Summary

Once the architecture of the system was recovered, we recommended applying a series of refactoring iterations to evolve the system. In addition to the process description, it is interesting to point out that the architecture selected for the case study, the OSGi Service Platform, has proven to be a natural candidate for evolving legacy Java applications into service-oriented systems. The OSGi platform leverages extensibility mechanisms over a lightweight core and provides seamless interoperability with other SOA technologies such as Web Services.

14.3.2 Lessons Learned

We obtained the following conclusions:


OSGi is the natural SOA approach for Java applications. It is quite mature and it has industrial support.

SOA improves maintainability because the coupling is lower.

Using the whiteboard pattern allows application components to be decoupled through the OSGi Service Registry.
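As an illustration of this last point, the following is a minimal sketch of the whiteboard pattern in Java. The StatusListener contract and all class names are hypothetical; only the OSGi registry and tracker APIs are real:

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// Hypothetical listener contract published by interested components.
interface StatusListener {
    void statusChanged(String status);
}

// Each listener registers itself in the OSGi Service Registry:
//   context.registerService(StatusListener.class.getName(), listener, null);
// The event source then tracks whoever is registered, instead of keeping
// its own listener list, so the source and its listeners stay decoupled.
class StatusNotifier {
    private final ServiceTracker tracker;

    StatusNotifier(BundleContext context) {
        tracker = new ServiceTracker(context, StatusListener.class.getName(), null);
        tracker.open();
    }

    void fire(String status) {
        Object[] listeners = tracker.getServices(); // null when none registered
        if (listeners == null) {
            return;
        }
        for (Object l : listeners) {
            ((StatusListener) l).statusChanged(status);
        }
    }
}

With this arrangement, adding or removing a listener is just a matter of registering or unregistering a service; the notifier never holds a stale reference.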

14.3.3 Final Recommendations

Service-Oriented Architectures provide an opportunity for organizations to reduce the costs and complexities of application integration and open up new possibilities for legacy applications. In addition, SOA improves the quality of applications in terms of maintainability (mainly through lower coupling).


15 Refactoring JEE application to Spring Framework

15.1 Problem Statement

15.1.1 Domain

This case study focuses on Java enterprise applications. Enterprise software is used in companies to solve business problems, usually offering services to the company's users and/or customers. An application server delivers the enterprise applications to client computers, typically through the Internet using HTTP. On the client side, browsers access the services provided by the enterprise applications through the application servers. The typical scenario for a Java enterprise system (three-tier based) is the following:

Client Tier: Browsers that request services from servers.

Application Server Tier: Java-based servers hosting the enterprise application, which is formed by Java components. The server is in charge of responding to client requests; in addition, it connects to external systems, such as databases.

Enterprise Information System (EIS): Databases and legacy systems.

Figure 37. Typical Java Enterprise System

Java Enterprise Edition (Java EE or JEE) [34] is the platform created by Sun Microsystems for developing this kind of application using the Java language on the server side. JEE has been the industrial de facto standard for years, and its specifications have been very widely adopted.

Nowadays, there are real alternatives to JEE in the domain of enterprise development. This is the case of the Spring Framework [35]. Spring is a full-stack Java/JEE application framework led and sustained by SpringSource (formerly Interface21) [36].

In this case study we analyze both frameworks and then look for a way to refactor JEE applications to the Spring Framework.


15.1.2 Current Situation

This section describes the current situation in Java enterprise development; in other words, it is a state-of-the-art study. We focus on the following versions: JEE 5 [38] and Spring Framework 2.5 [39].

15.1.2.1 Java Enterprise Edition

Java Enterprise Edition (JEE or Java EE), officially the Java Platform, Enterprise Edition, is a platform for server-side programming using the Java language. Formerly this platform was known as Java 2 Enterprise Edition (J2EE); the name was changed to Java EE in version 5. The current version is called Java EE 5 [34].

JEE was created by Sun Microsystems. Sun defines three platforms based on the Java language, covering different application environments:

Java SE (Standard Edition): For general purpose use on desktop PCs, servers and similar devices.

Java EE (Enterprise Edition): Java SE plus various APIs useful for multi-tier client-server enterprise applications.

Java ME (Micro Edition): Specifies several different sets of libraries (known as profiles) for devices which are sufficiently limited that supplying the full set of Java libraries would take up unacceptably large amounts of storage.

The Java EE 5 platform provides a set of APIs for developing enterprise applications that reduce development time and complexity and improve performance. Java EE 5 offers a simplified programming model, since configuration is possible either with XML deployment descriptors or with annotations in the Java source files.

The Java EE platform uses a distributed multi-tiered application model for enterprise applications (see Figure 38). Application logic is divided into components according to function; because a Java EE application is made up of modular components, these components can be distributed across different sites.


Figure 38. Multi-tiered Applications in JEE 5

Java EE components are written in the Java programming language. The difference between Java EE components and "standard" Java classes is that Java EE components are assembled into a Java EE application, verified to be well formed and in compliance with the Java EE specification, and deployed to production, where they are run and managed by the Java EE server.

The APIs provided by Java EE 5, grouped following the multi-tiered approach, are the following:

Figure 39. API's in Java EE 5

15.1.2.1.1 Web Tier


Java EE 5 defines two types of web applications:

Presentation-oriented: Applications with interactive web pages containing various types of markup (HTML, XML, and so on) that generate dynamic content in response to requests.

Service-oriented: A service-oriented web application implements the endpoint of a web service. Presentation-oriented applications are often clients of service-oriented web applications.

The first kind of application is explained in this section (15.1.2.1.1); service-oriented applications are covered in section 15.1.2.1.2 (Web Services).

The main technologies involved in this tier are: Java Servlet, JSP, JSTL and JavaServer Faces. Their relationships are illustrated in the following diagram:

Figure 40. Java Web Application Technologies

This tier covers the components used to develop the presentation layer of a Java EE 5 or stand-alone web application:

Java Servlet [86]. Servlets are Java classes that dynamically process HTTP requests and construct HTTP responses (a minimal example is sketched after this list).

JavaServer Pages (JSP) [87]. JSP pages are text-based documents that execute as servlets but allow a more natural approach to creating static content. A JSP page is a text document that contains two types of text: static data, which can be expressed in any text-based format (commonly HTML), and JSP elements, which construct dynamic content. Besides, JSP has some utilities that ease development, like JSTL, which provides new tags, and JavaServer Faces. The main features of JSP technology are:

o A language for developing JSP pages, which are text-based documents that describe how to process a request and construct a response

o An expression language for accessing server-side objects

o Mechanisms for defining extensions to the JSP language

JavaServer Pages Standard Tag Library (JSTL) [88]. JSTL encapsulates core functionality common to many JSP applications, to avoid repeating scriptlets. It has iterator and conditional tags for handling flow control, tags for accessing databases using SQL, internationalization (I18N) and commonly used functions.

JavaServer Faces (JSF) [77]. JSF is a web application framework intended to simplify development of user interfaces for Java enterprise applications. It provides a rich architecture for managing component state, processing component data, validating user input, and handling events. JSF uses a component-based approach: the state of UI components is saved when the client requests a new page and is restored when the request is returned. One of the greatest advantages of JavaServer Faces technology is that it offers a clean separation between behavior and presentation. A typical JSF application includes the following pieces:

o A set of JSP pages (or other view presentation technology).

o A set of backing beans, which are JavaBeans components that define properties and functions for UI components on a page.

o An application configuration resource file, which defines page navigation rules and configures beans and other custom objects, such as custom components.

o A deployment descriptor (web.xml file).

o Possibly a set of custom objects created by the application developer. These objects might include custom components, validators, converters, or listeners.

o A set of custom tags for representing custom objects on the page.
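As a concrete illustration of the servlet model described in the list above, the following is a minimal sketch of a servlet; the class name and output are illustrative:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Processes an HTTP GET request and constructs an HTML response.
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.getWriter().println("<h1>Hello from a servlet</h1>");
    }
}

The servlet would be declared and mapped to a URL pattern in the web.xml deployment descriptor of its web module.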

Web modules are packaged into WAR files, which have the structure shown in Figure 41. The classes directory contains server-side classes (servlets, utility classes, and JavaBeans components), while the JSP pages are placed in the root of the archive.

Figure 41. Web Module Structure

15.1.2.1.2 Web Services

This part of JEE 5 covers the APIs used in developing standard web services:

Java API for XML-based Web Services (JAX-WS): JAX-WS is the successor of JAX-RPC, allowing access to WSDL/SOAP-based web services. It provides support for web services that use the JAXB API for binding XML data to Java objects. The Web Services for J2EE specification describes the deployment of JAX-WS-based services and clients. The JAX-WS specification describes the support for message handlers that can process message requests and responses.

Java API for XML Binding (JAXB): The Java Architecture for XML Binding (JAXB) provides a convenient way to bind an XML schema to a representation in Java language programs. JAXB can be used independently or in combination with JAX-WS, where it provides a standard data binding for web service messages.

Java API for XML Registries (JAXR): JAXR provides access to business and general-purpose registries over the web (ebXML Registry, UDDI).

SOAP with Attachments API for Java (SAAJ): SAAJ is a low-level API on which JAX-WS and JAXR depend. It enables the production and consumption of messages that conform to the SOAP 1.1 specification and the SOAP with Attachments note.

The Streaming API for XML (StAX) provides a standard, bidirectional pull parser interface for streaming XML processing. StAX is the latest API in the JAXP (Java API for XML Processing) family, offering a programming model simpler than SAX (Simple API for XML) and more efficient in memory than DOM (Document Object Model).

The schema of the web services APIs can be seen in the following diagram:

Figure 42. Web Services APIs in JEE 5

15.1.2.1.3 Enterprise Beans

The Enterprise JavaBeans (EJB) technology provides components used in developing the business logic of a Java EE 5 application; the business logic is the code that fulfils the purpose of the application. Enterprise beans run in the EJB container, a runtime environment within the Application Server. Enterprise beans simplify development for three reasons: the EJB container provides system-level services to enterprise beans, the beans (and not the clients) contain the application's business logic, and enterprise beans are portable components.

There are two types of Enterprise beans:

Session beans: They perform a task for a client and may optionally implement a web service. A session bean represents a single client inside the Application Server. To access an application that is deployed on the server, the client invokes the session bean's methods. The session bean performs work for its client, shielding the client from complexity by executing business tasks inside the server. The session bean can save the state of the client session (stateful session beans) or not (stateless session beans). Since stateless session beans can support multiple clients, they can offer better scalability for applications that require large numbers of clients. Clients access session beans through interfaces.

Message-Driven beans: They act as listeners for a particular messaging type, such as the Java Message Service API, which allows Java EE applications to process messages asynchronously. The messages can be sent by any Java EE component, or by a JMS application or system that does not use Java EE technology. Message-driven beans can process JMS messages or other kinds of messages. In this case, clients do not access the beans through interfaces.
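A minimal sketch of an EJB 3.0 stateless session bean follows; the business interface and method are illustrative:

import javax.ejb.Local;
import javax.ejb.Stateless;

// Hypothetical business interface; clients access the bean through it.
@Local
interface OrderService {
    double computeTotal(double unitPrice, int quantity);
}

// The container provides transactions, security and pooling around this class.
@Stateless
public class OrderServiceBean implements OrderService {
    public double computeTotal(double unitPrice, int quantity) {
        return unitPrice * quantity;
    }
}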

15.1.2.1.4 Persistence

This section covers persistence in Java EE, the layer used for accessing databases from Java EE applications.

The Java Database Connectivity (JDBC) API provides a standard way to invoke SQL commands from Java programming language methods.8

Java Persistence API (JPA): JPA is a key piece of JEE 5 persistence. It provides object/relational mapping (ORM) for managing relational data in enterprise beans, web components, and application clients, bridging the gap between an object-oriented model and a relational database.

Persistence in the Web Tier: Data that is shared between web components is usually maintained in a database. The steps for accessing those data using JPA in web applications are the following:

o Defining the Persistence Unit. A persistence unit is defined by a persistence.xml file, which is packaged with the application WAR file.

o Creating an Entity Class:

o Add the @Entity annotation to the class.

o Add the @Id annotation (primary key of the table).

o Add the @Table annotation (name of the database table).

o Optionally make the class Serializable.

o Obtaining Access to an Entity Manager. JPA allows developers to use annotations (@PersistenceUnit) to identify a resource so that the container can transparently inject it into an object.

o Accessing Data from the Database

o Updating Data in the Database

The Java Persistence Query Language (JPQL). JPQL defines queries for entities and their persistent state.
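Putting the entity-class steps above together, a minimal sketch of a JPA entity could look as follows; the entity and table names are illustrative:

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "CUSTOMER")
public class Customer implements Serializable {

    @Id
    @GeneratedValue            // primary key generated by the persistence provider
    private Long id;

    private String name;       // mapped by default to a NAME column

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}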

15.1.2.1.5 Services

This section covers the system services used by all the Java EE 5 component technologies:

Security:

8 Actually, JDBC is part of Java SE since version 4.


o Securing Enterprise Beans. Protection in EJB can be achieved through the following:

Accessing an Enterprise Bean Caller's Security Context

Declaring Security Role Names Referenced from Enterprise Bean Code

Defining a Security View of Enterprise Beans

Using Enterprise Bean Security Annotations

Using Enterprise Bean Security Deployment Descriptor Elements

Configuring IOR Security

Deploying Secure Enterprise Beans

o Securing Application Clients

Using login modules. An application client can use the Java Authentication and Authorization Service (JAAS) to create login modules for authentication.

Using programmatic login. Programmatic login enables the client code to supply user credentials.

o Securing EIS Applications. In EIS applications, components request a connection to an EIS resource (which may require a sign-on for the requester to access the resource). The application component provider has two choices for the design of the EIS sign-on:

In the container-managed sign-on approach, the application component lets the container take the responsibility of configuring and managing the EIS sign-on.

In the component-managed sign-on approach, the application component code manages EIS sign-on by including code that performs the sign-on process to an EIS.

o Securing Web Applications. Java EE security services can be implemented for web applications in the following ways:

Annotations.

Declarative security expresses an application's security structure, including security roles, access control, and authentication requirements, in a deployment descriptor.

Programmatic security is embedded in an application and is used to make security decisions.

o Java Authorization Contract for Containers (JACC) defines security contracts between the Application Server and authorization policy modules. These contracts specify how the authorization providers are installed, configured, and used in access decisions.

Remoting: Java Message Service (JMS). It is a remoting standard that allows Java EE application components to create, send, receive, and read messages. It enables distributed communication that is loosely coupled, reliable, and asynchronous [61].

Transactions:


o Java Transaction API (JTA) provides a standard interface for demarcating transactions across multiple XA resources [69].

o Java Transaction Service (JTS) specifies the implementation of a transaction manager that supports JTA (JSR 907) [90].

Resource connections. In a distributed application, components need to access other components and resources (e.g. databases). The Java Naming and Directory Interface (JNDI) naming service enables components to locate other components and resources.

The Java EE Connector Architecture (JCA). JCA enables Java EE components to interact with enterprise information systems (EISs). JCA simplifies the integration of diverse EISs [64].

Mails: JavaMail is a Java API used to receive and send email via SMTP, POP3 and IMAP. JavaMail uses the JavaBeans Activation Framework (JAF). JAF provides standard services to determine the type of an arbitrary piece of data, encapsulate access to it, discover the operations available on it, and create the appropriate JavaBeans component to perform those operations.

Management: Java Management Extensions (JMX)9 [62]. JMX is a Java technology that supplies tools for managing and monitoring applications.

15.1.2.2 Spring Framework

Spring is a layered Java/J2EE application framework, based on code published in Expert One-on-One J2EE Design and Development by Rod Johnson [40]. Spring was created to address the complexity of enterprise application development.

In broad strokes, the Spring Framework includes:

A lightweight container, providing centralized, automated configuration and wiring of your application objects. The container is non-invasive, capable of assembling a complex system from a set of loosely-coupled components (POJOs, Plain Old Java Objects).

A common abstraction layer for transaction management, allowing for pluggable transaction managers, and making it easy to demarcate transactions without dealing with low-level issues.

A JDBC abstraction layer that offers a meaningful exception hierarchy (no more pulling vendor codes out of SQLException) and simplifies error handling.

Integration with existing persistence solutions like TopLink, Hibernate, JDO, and iBATIS SQL Maps.

AOP (Aspect-Oriented Programming) functionality, fully integrated into Spring configuration management.

A flexible MVC web application framework, built on core Spring functionality. The Spring middle tier can easily be combined with a web tier based on any other web MVC framework, like Struts, WebWork, or Tapestry.

Spring's functionality can be used in any Java application server (JEE-compliant or not). The Spring Framework is also the base platform for several sister Spring projects [42]:

9 Adopted early by J2EE, JMX has been part of Java SE since version 5.0.


Spring Web Flow [43]. It is an application controller framework that allows developers to model user actions as high-level modules called flows.

Spring Web Services (Spring-WS) [44]. It aims to facilitate contract-first SOAP service development with Spring Framework.

Spring Security (Acegi Security) [45]. It is a security solution for enterprise software, with a particular emphasis on applications that use Spring.

Spring Dynamic Modules for OSGi Service Platforms [46]. This project makes possible to build Spring applications that run in an OSGi framework.

Spring Batch [47]. It is a batch framework designed to enable the development of robust batch applications vital for the daily operations of enterprise systems.

Spring Integration [48]. It provides an extension of the Spring programming model to support the Enterprise Integration Patterns [56].

Spring LDAP [49]. Java library for simplifying LDAP operations, based on the pattern of Spring's JdbcTemplate.

Spring IDE [50]. Spring IDE is a graphical user interface for the configuration files used by the Spring Framework. It's built as a set of plugins for the Eclipse platform.

Spring Modules [51]. Collection of tools, add-ons and modules to extend the Spring Framework. It provides integration with Ant, Flux, HiveMind, Lucene, Apache OJB, O/R Broker, OSWorkflow, Tapestry, caching (EHCache, JCS, OSCache, GigaSpaces and others), db4o, JSR-94 Rules Engines (Drools and Jess), among others.

Spring JavaConfig [52]. This project is an experiment in producing a Java-based alternative to configuring Spring application contexts. Instead of using XML for configuration, JavaConfig provides other ways of configuration: Groovy and properties files.

Spring Rich Client [53]. This project provides a solution for developers that need a platform for constructing high-quality Swing applications quickly.

Spring .NET [54]. Spring.NET is an open source application framework that makes building enterprise .NET applications easier.

Spring BeanDoc [55]. Tool that facilitates documentation and graphing of Spring bean factories and application context files.

Pitchfork [89]. Pitchfork is an open source project (Apache license) developed by SpringSource and BEA Systems. The aims of the Pitchfork project are twofold:

o To provide a basis for implementing the new Java EE 5.0 features in existing application servers, based on Spring.

o To support Java EE 5.0 annotations inside the Spring container.

All Spring projects are licensed under the terms of the Apache License, Version 2.0 [41]. In this study, we focus on Spring Framework version 2.5 [39]. The outline of this analysis is the following:

Architecture. This subsection shows the big picture (main components) in Spring Framework.

Spring Core. This subsection covers the technological principles of the Spring Framework.


Middle Tier Data Access. This subsection covers the data access tier.

Web Tier. This subsection covers the presentation tier.

Integration. This part of the document summarizes the integration with JEE (and related) technologies.

15.1.2.2.1 Architecture

The Spring Framework is made up of several well-defined modules (Figure 43) [37]. This architecture offers everything that developers need to build enterprise-ready applications.

Figure 43 - Overview of the Spring Framework

Spring's modules are built on top of the core container. This container defines how beans are created, configured, and managed. The modules above the container provide the frameworks with which you will build your application's services, such as AOP and persistence.

The Core container. Spring's core container provides the fundamental functionality of the Spring Framework. This module contains the BeanFactory, which is the fundamental Spring container and the basis on which Spring's Dependency Injection (DI) and Inversion of Control (IoC) are built (see 15.1.2.2.2). This tier also contains the application context module, which builds on the core container. The core module's BeanFactory makes Spring a container, but the context module is what makes it a framework. This module extends the concept of BeanFactory, adding support for internationalization (I18N) messages, application lifecycle events, and validation. In addition, this module supplies many enterprise services such as email, JNDI access, EJB integration, remoting, and scheduling. Also included is support for integration with frameworks such as Velocity and FreeMarker.

Spring's AOP module. This module is the basis for developing your own aspects for your Spring-enabled application. Like DI, AOP supports loose coupling of application objects.


JDBC abstraction and the DAO module. This module abstracts away boilerplate code so you can keep your database code clean and simple, and prevents problems that result from a failure to close database resources. It also builds a layer of meaningful exceptions on top of the error messages given by several database servers. In addition, this module uses Spring's AOP module to provide transaction management services for objects in a Spring application.

Object-relational mapping (ORM) integration module. Spring's ORM support builds on the DAO support, providing a convenient way to build DAOs for several ORM solutions. Spring doesn't attempt to implement its own ORM solution; instead, it provides hooks into several popular ORM frameworks, including Hibernate, Java Persistence API, Java Data Objects, and iBATIS SQL Maps. Spring's transaction management supports each of these ORM frameworks as well as JDBC.

JEE Module. This layer provides integration with existing Java SE and EE technologies, for example:

o Java Management Extensions (JMX). Spring's JMX module makes it easy to expose your application's beans as JMX MBeans, which makes it possible to monitor and reconfigure a running application.

o Java EE Connector API (JCA). This module provides a standard way of integrating Java applications with a variety of enterprise information systems, including mainframes and databases.

o Java Message Service (JMS). This module helps you send messages to JMS message queues and topics. Also it helps you create message-driven POJOs that are capable of consuming asynchronous messages.

Spring's Web module. Spring provides two main modules for the web tier:

o Spring MVC framework. The Model/View/Controller (MVC) paradigm is a commonly accepted approach to building web applications such that the user interface is separate from the application logic. Because of this, Spring also comes with its own very capable MVC framework that promotes Spring's loosely coupled techniques in the web layer of an application.

o Spring Portlet MVC. This module builds on Spring MVC to provide a set of controllers that support Java's portlet API. Unlike page-based Spring MVC applications, where each request results in a completely new page being displayed, portlet applications aggregate several bits of functionality on a single page (see 15.1.2.2.4).

Spring also provides integration support with Apache Struts and JavaServer Faces (JSF).

15.1.2.2.2 Spring Core

The Spring Framework is basically a lightweight dependency-injection and aspect-oriented container and framework. Its core technological principles are the following:

Beans. The objects that form the backbone of a Spring application and that are managed by the Spring IoC container. Beans have a public constructor and getters and setters for their attributes.

POJO (Plain Old Java Object). This concept simply refers to the natural condition of a Java object; it is a new word for something old. The name is used to emphasize that the object in question is an ordinary Java object, not a special object. In practice, the concepts "bean" and "POJO" are very close: Spring beans are built as POJOs.

Inversion of Control (IoC). IoC is a concept in software development in which the control flow is inverted compared to the traditional interaction model. Informally, IoC can be summarized with the Hollywood Principle: "don't call us, we will call you". In Spring, the IoC container is responsible for creating bean instances according to the configuration (usually in XML).

Dependency Injection (DI) is a technique that Spring offers to POJOs in order to provide loose coupling. Dependency injection addresses the case in which two or more classes collaborate with each other to perform some business logic. Traditionally, each object was responsible for obtaining its own references to the objects it collaborated with (its dependencies). With DI, objects are given their dependencies at creation time by some external entity (the Spring IoC container) that coordinates each object in the system; in other words, dependencies are injected into objects. DI therefore means an inversion of responsibility with regard to how an object obtains references to collaborating objects. The key benefit of DI is loose coupling: a dependency can be swapped for a different implementation of the same interface without the depending object noticing the difference.

Aspect-oriented programming (AOP). Aspect-oriented programming enables you to capture functionality that is used throughout your application in reusable components. It is often defined as a programming technique that promotes separation of concerns within a software system. AOP makes it possible to modularize system services and then apply them declaratively to the components that they should affect. This results in components that are more cohesive, that focus on their own specific concerns, and that remain completely ignorant of any system services that may be involved. In summary, AOP enables you to centralize logic that would normally be scattered throughout an application in one place: an aspect. When Spring wires your beans together, these aspects can be woven in at runtime, effectively giving the beans new behaviour.
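To make the DI principle concrete, here is a minimal sketch of setter-based injection; all class and bean names are illustrative:

// The service does not look up its collaborator; the container injects it.
public class GreetingService {
    private GreetingRepository repository;

    public void setRepository(GreetingRepository repository) {
        this.repository = repository;
    }

    public String greet() {
        return "Hello, " + repository.findName();
    }
}

The corresponding wiring in the XML bean definition file would be:

<bean id="repository" class="example.JdbcGreetingRepository"/>
<bean id="greetingService" class="example.GreetingService">
  <property name="repository" ref="repository"/>
</bean>

Swapping JdbcGreetingRepository for another implementation of GreetingRepository requires no change to GreetingService.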

15.1.2.2.3 Middle Tier Data Access

Transaction management. Traditionally, JEE developers have had two choices for transaction management: global or local transactions. Global transactions are managed by the application server (usually using the Java Transaction API, JTA [69]). Local transactions are resource-specific, e.g. a transaction associated with a JDBC connection. Both have problems: with global transactions, the reusability of application code is limited because JTA is normally only available in an application server environment; with local transactions, correctness across multiple resources cannot be ensured. To solve this, Spring proposes two ways of managing transactions:

o Declarative Transaction Management. This is the option favoured by Spring users, because it has the least impact on application code, and hence is most consistent with the ideals of a non-invasive lightweight container.

o Programmatic Transaction Management.

DAO (Data Access Object) support. Spring provides a translation from technology-specific exceptions like SQLException to its own exception class hierarchy, with DataAccessException as the root exception (see the figure below).


Figure 44. DataAccessException hierarchy

In addition, Spring DAO provides a set of abstract classes with methods for providing the data source and any other configuration settings specific to the relevant data-access technology:

o JdbcDaoSupport. Superclass for JDBC data access objects.

o HibernateDaoSupport. Superclass for Hibernate data access objects.

o JdoDaoSupport. Superclass for JDO data access objects.

o JpaDaoSupport. Superclass for JPA data access objects.

Data access using JDBC. Spring takes care of all the low-level details of the JDBC API (a usage sketch follows this list). JdbcTemplate is the central class in the JDBC core package. It simplifies the use of JDBC since it handles the creation and release of resources, the closing of connections, and so on. It executes the core JDBC workflow, like statement creation and execution, leaving application code to provide SQL and extract results. This class executes SQL queries, update statements or stored procedure calls, initiating iteration over ResultSets and extracting returned parameter values. Besides JdbcTemplate, there are other templates for accessing data using JDBC with Spring support:

o NamedParameterJdbcTemplate. Uses named parameters instead of the traditional JDBC "?" placeholders.

o SimpleJdbcTemplate. Takes advantage of some Java 5 features like varargs, autoboxing and generics to provide an easier-to-use API.

o SimpleJdbcInsert and SimpleJdbcCall. Take advantage of database metadata to limit the amount of configuration needed.

o RDBMS Objects, including MappingSqlQuery, SqlUpdate and StoredProcedure - an approach where you create reusable and thread-safe objects during the initialization of your data access layer.

Object Relational Mapping (ORM) data access. Spring provides integration with Hibernate, JDO, Oracle TopLink, iBATIS SQL Maps and JPA, in terms of resource management, DAO implementation support, and transaction strategies.
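As announced in the JDBC item above, here is a minimal usage sketch of JdbcTemplate; the table, column and data source are illustrative and assumed to be configured elsewhere:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// The template handles connections, statements and resource release;
// application code only supplies SQL and reads results.
public class AccountDao {
    private final JdbcTemplate jdbc;

    public AccountDao(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    public int countAccounts() {
        return jdbc.queryForInt("select count(*) from accounts");
    }

    public void insertAccount(String name) {
        jdbc.update("insert into accounts (name) values (?)", new Object[] { name });
    }
}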

15.1.2.2.4 Web Tier


Spring MVC. The Model/View/Controller (MVC) paradigm is a commonly accepted approach to building web applications such that the user interface is separate from the application logic. Some examples of MVC frameworks are Apache Struts, JSF, WebWork, and Tapestry. Even though Spring integrates with several popular MVC frameworks, it also comes with its own very capable MVC framework that promotes Spring's loosely coupled techniques in the web layer of an application [37]. Spring's Web MVC framework is designed around a DispatcherServlet that dispatches requests to handlers (1). The DispatcherServlet consults one or more handler mappings (2) to choose the correct controller (3). The controller is in charge of processing the request and performing some actions; actually, a well-designed controller delegates responsibility for the business logic to one or more service objects. Typically this results in some information that needs to be carried back to the user and displayed in the browser (this information is referred to as the model). The last thing the controller does is package up the model data in a ModelAndView object (4). The controller isn't coupled to any particular view: the DispatcherServlet asks a ViewResolver to find the actual view (5). The final step is the view implementation (typically a JSP) rendering the model data (6).

Figure 45. Processing a request in Spring MVC

View technologies. Spring provides a low coupling between the view technologies and the controller-model using Spring MVC. In addition, Spring provides easily integration with several presentation technologies:

o JSP (JavaServer Pages) and JSTL (JavaServer Pages Standard Tag Library). The most commonly used view resolvers when developing with JSPs are the InternalResourceViewResolver and the ResourceBundleViewResolver. When using JSTL, the view class is JstlView, which exposes JSTL-specific request attributes so that you can take advantage of JSTL's internationalization (I18N) support.

o Tiles. It is another view technology developed by the Apache Software Foundation. Apache Tiles is a templating framework built to simplify the development of web application user interfaces [70]. In Spring the Tiles configuration is done using TilesConfigurer.

o Velocity & FreeMarker. Velocity is an open source package developed by the Apache Software Foundation. It is a Java-based template engine that provides a language to reference objects defined in Java code [71]. FreeMarker is a template engine, that is, a generic tool to generate text output [72]. Spring provides easy integration with both template frameworks.

o XSLT (XSL Transformations). XSLT is a transformation language for XML defined by W3C. It is popular as a view technology within web applications.

o Document view (PDF/Excel). Spring makes it simple to generate a PDF document or an Excel spreadsheet dynamically from the model data. In order to use Excel views, Spring provides integration with Apache POI API [74]. For PDF generation, Spring provides integration with iText library [75]. Both libraries are included in the Spring distribution with dependencies.

o JasperReports. It is an open-source reporting engine that supports the creation of report designs using an easily understood XML file format [76]. JasperReports is capable of rendering reports output into four different formats: CSV, Excel, HTML and PDF.

Integration with other frameworks. Spring provides integration with third party web frameworks, such as:

o JSF (JavaServer Faces) is the JCP's (Java Community Process) standard component-based, event-driven web user interface framework [77]. The key element in Spring's JSF integration is the JSF 1.1 VariableResolver mechanism; on JSF 1.2, Spring supports the ELResolver mechanism.

o Struts. It is an open-source web application framework for developing Java-based web applications [78]. It was developed by Apache, and it uses and extends the Java Servlet API to encourage developers to adopt the MVC architecture. Due to its early first release (June 2001), it has been widely adopted.

o WebWork. It is another Java web-application development framework. It was built specifically with developer productivity and code simplicity in mind [79].

o Tapestry. Apache Tapestry is an open-source framework for creating dynamic, robust, highly scalable web applications in Java [80].

Portlet MVC framework [81]. Many web applications are page based (each request to the application results in a completely new page). In contrast, portlet-based applications aggregate several bits of functionality on a single web page. Spring Portlet MVC builds on Spring MVC to provide a set of controllers that support Java's portlet API [37]. When the request is sent to the application from the portlet container (1), the DispatcherPortlet consults one or more handler mappings (2). Portlet handler mappings are similar to Spring MVC handler mappings, except that they map portlet modes and parameters, instead of URL patterns, to controllers. Once a suitable controller has been chosen, DispatcherPortlet sends the request straight to the controller for processing (3). The controller returns a ModelAndView object (4) back to DispatcherPortlet; this is the same ModelAndView that would be returned by a Spring MVC controller. At this point, DispatcherPortlet must look up the actual view implementation by its logical view name, consulting a ViewResolver (5). The final stop for a render request is the actual view implementation (6). The view uses the model data contained in the request to produce output in the portlet's space within the portal page.


Figure 46. Processing a request in Portlet MVC

15.1.2.2.5 Integration

Integration has become a crucial topic in the software community. Reusing different technologies has potential benefits, including increased product quality and decreased product cost and schedule. For this reason, Spring provides integration with many technologies, mainly JEE or related ones. The following list is a summary of the technologies integrated in Spring Framework 2.5 [38]:

Remoting. The concept "remoting" groups the mechanisms for communication between two or more applications or systems. The Spring Framework provides integration for the following remoting technologies:

o Remote Method Invocation (RMI).

o Spring's HTTP invoker. Spring provides Java serialization via HTTP.

o Hessian. A developer can transparently expose her services using the lightweight binary HTTP-based protocol provided by Caucho [57].

o Burlap. Burlap is Caucho's XML-based alternative to Hessian [58].

o JAX-RPC. Java API for XML-based RPC (J2EE 1.4's web service API) [59].

o JAX-WS. Java API for XML Web Services. Successor of JAX-RPC, as introduced in Java EE 5 and Java 6 [60].

o JMS. Java Message Service. Remoting using JMS as the underlying protocol [61].

Enterprise Java Beans (EJB). Spring is often considered an EJB replacement; nevertheless, Spring can also be used in combination with EJBs. Spring provides facilities for connecting to EJB 2.x and 3.0 local and remote Stateless Session Beans (SLSB).

Java Message Service (JMS). Spring provides a JMS [61] integration framework that simplifies the use of the JMS API and shields the user from differences between the JMS 1.0.2 and 1.1 APIs.

Java Management Extensions (JMX) [62]. Spring's JMX support provides four core features:

o The automatic registration of any Spring bean as a JMX MBean.

o A flexible mechanism for controlling the management interface of your beans.


o The declarative exposure of MBeans over remote, JSR-160 connectors.

o The simple proxying of both local and remote MBean resources.

JCA CCI. J2EE provides a specification to standardize access to enterprise information systems (EIS): the JCA (Java Connector Architecture) [64]. This specification is divided into several parts: the SPI (Service Provider Interfaces), which the connector provider must implement, and the CCI (Common Client Interface), which an application can use to interact with the connector and thus communicate with an EIS. Spring CCI provides classes to access a CCI connector.

Email. Spring provides a helpful utility library for sending emails. Spring uses external dependencies to offer this feature: the JavaMail [64] and JAF (Java Activation Framework) [65] libraries. JAF provides standard services to determine the type of an arbitrary piece of data, encapsulate access to it, discover the operations available on it, and create the appropriate JavaBeans component to perform those operations.

Scheduling and Thread Pooling. Spring provides integration with the following scheduling technologies: Timer (JDK since 1.3) and Quartz Scheduler [66].

Dynamic language support. Spring 2.0 introduced full support for using classes and objects defined in the following dynamic languages: JRuby, Groovy and BeanShell.

Annotations and Source-Level Metadata Support. Java 5 provides a standard metadata facility (JSR-175), that is, annotations. Spring has specific Java 5 annotations for transaction demarcation, JMX, and aspects. In addition, Spring provides integration with other metadata support: XDoclet [67] and Jakarta Commons Attributes [68].

15.1.3 Goals and Expected benefits

This case study intends to achieve a simple goal: to learn how to evolve Java EE 5 based applications to the Spring Framework. This task is not easy at all: both worlds are very large, so it is complex to define a method for this refactoring. Nevertheless, the expected benefit of this study is to collect the knowledge that Java EE developers (the de facto standard) need to create and evolve their applications as Spring developers.

15.2 Solution

15.2.1 Approach

According to the state-of-the-art study (previous section), Java EE 5 is a platform for server programming using the Java language. This platform comprises a set of APIs covering several aspects of the application architecture. The most important APIs in Java EE 5, grouped by functionality, are the following:

Web:

o Servlet 2.5: Java components for processing HTTP requests and responses.

o JSP 2.1: Text documents that generates HTML in response to requests.

o JSTL 1.2: Tag library with extra features for JSP: I18N, XML processing etc.

o JSF 1.2: Web application framework that enhances Servlets, JSP and JSTL.

Web Services:


o JAX-WS 2.0: API for web services creation based on annotations.

o JAXB 2.0: API for mapping (marshal and unmarshal) Java classes to XML.

o SAAJ 1.3: API for sending XML documents over the Internet conforming to the SOAP 1.1 specification.

o StAX 1.0: API for reading and writing XML documents.

Enterprise Beans:

o EJB 3.0: Standard components to implement the business logic.

Persistence:

o JPA 1.0: API that provides relational persistence (ORM) for Java.

Security:

o JAAS 1.0: Java security framework for authentication and authorization.

Remoting:

o JMS 1.1: API for sending messages between Java systems.

Transactions:

o JTA 1.1: API for performing distributed transactions across multiple XA resources.

Resource locating:

o JNDI 1.2: API for discovering and lookup data and objects via a name.

EIS Integration:

o JCA 1.5: Java technology for connecting application servers and EIS.

With this API list in mind, we now want to find out how to achieve the same functionality using the Spring Framework. Note that neither JDBC nor JMX is in the list above, because these APIs are now part of Java SE rather than Java EE 5.

The Spring Framework, as we have learned, is a full-stack Java/JEE application framework based on IoC principles, POJO-based development and integration with existing technologies (not reinventing the wheel). Because of that, when refactoring Java EE applications to the Spring Framework, we have two choices:

1. Integrate the Java EE API under Spring Framework.

2. Look for alternative technologies under the Spring Framework. These alternatives can be Spring-native or third-party libraries easily integrated using the Spring Framework.

With these assumptions, we finally want to fill the following table:

Java EE                                      Spring Framework
                                             Integration    Alternatives
Web: Servlet, JSP, JSTL, JSF                 ?              ?
Web Services: JAX-WS, JAXB, SAAJ, StAX       ?              ?
Enterprise: EJB                              ?              ?
Persistence: JPA                             ?              ?
Security: JAAS                               ?              ?
Remoting: JMS                                ?              ?
Transactions: JTA                            ?              ?
Resource Locating: JNDI                      ?              ?
EIS Integration: JCA                         ?              ?

15.2.2 Major Results

In this section we start to answer the proposed questions, using the information gathered in the state-of-the-art study above.

15.2.2.1 Web Tier in Java EE: Servlet, JSP, JSTL, JSF

15.2.2.1.1 Integration with Spring

In Spring, the business-specific objects live inside the Spring IoC container; in web applications, this container is called WebApplicationContext. To integrate the WebApplicationContext with a Java-based web application, it is mandatory to declare a ContextLoaderListener in the standard JEE servlet web.xml file. Optionally, we can specify the contextConfigLocation context parameter to point to the application context definition(s):

<listener>
  <listener-class>
    org.springframework.web.context.ContextLoaderListener
  </listener-class>
</listener>
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>

JSP and JSTL based pages can now easily access the beans defined in the applicationContext.xml file. The code snippet for accessing a bean from a JSP page is the following:

<%
WebApplicationContext ctx =
    WebApplicationContextUtils.getWebApplicationContext(application);
HelloBean hello = (HelloBean) ctx.getBean("helloBean");
out.println(hello.sayHello());
%>

The easiest way to integrate a Spring middle tier with a JSF web layer is to use the DelegatingVariableResolver class. Rather than resolving variables only from among JSF's managed beans, this variable resolver also looks in the Spring application context. It is configured in faces-config.xml and makes the resolution of Spring-managed beans transparent in JSF:

<application>
  <variable-resolver>
    org.springframework.web.jsf.DelegatingVariableResolver
  </variable-resolver>
</application>

<managed-bean>
  <managed-bean-name>helloManagedBean</managed-bean-name>
  <managed-bean-class>
    es.upm.dit.serious.jeespring.jsf.HelloManagedBean
  </managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
  <managed-property>
    <property-name>helloBean</property-name>
    <value>#{helloBean}</value>
  </managed-property>
</managed-bean>

After that, in a JSF-based JSP page, we can call a Spring bean like this:

<f:view>
  <html>
    <head>
      <title>Hello JSF</title>
    </head>
    <body>
      <h:outputText value="#{helloManagedBean.greeting}"/>
    </body>
  </html>
</f:view>

15.2.2.1.2 Alternatives

The Spring community offers new solutions in the web tier for Java enterprise development: Spring MVC and Portlet MVC. In addition, Spring Web Flow, a special controller framework on top of Spring MVC, allows creating web applications structured as navigation flows.

In Spring MVC, apart from the WebApplicationContext configuration, it is mandatory to configure the DispatcherServlet and the mappings for this servlet:

<servlet>
  <servlet-name>hello</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>hello</servlet-name>
  <url-pattern>/hello/*</url-pattern>
</servlet-mapping>

The name given to the servlet is significant, because DispatcherServlet uses it to locate its Spring configuration file: in the example above, the file must be named hello-servlet.xml. This file configures the handler mapping, the controller(s) and the view resolver (see Figure 45). A simple example is the following:

<bean id="viewResolver"

class="org.springframework.web.servlet.view.InternalResourceViewResolver">

<property name="prefix" value="/WEB-INF/jsp/" />

<property name="suffix" value=".jsp" />

</bean>

<bean name="helloController"

class="es.upm.dit.serious.jeespring.mvc.HelloController">

<property name="helloBean" ref="helloBean" />

</bean>

<bean id="simpleUrlMapping"

class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">

Page 122: Case Studies on Platform Migration and Refactoring

SERIOUS

ITEA 04032

WP1 Deliverable 1.4

Page 122 of 141

WP1 Partners Public 29/08/2008

<property name="mappings">

<props>

<prop key="/hello/sayhello">helloController</prop>

</props>

</property>

<property name="alwaysUseFullPath">

<value>true</value>

</property>

</bean>

In the example above, the view resolver is used for JSP pages. The most commonly used view resolvers for JSPs are the InternalResourceViewResolver and the ResourceBundleViewResolver. When we use JSTL in the views, we need a special view class, JstlView:

<bean id="viewResolver"

class="org.springframework.web.servlet.view.InternalResourceViewResolver">

<property name="viewClass"

value="org.springframework.web.servlet.view.JstlView" />

<property name="prefix" value="/WEB-INF/jsp/" />

<property name="suffix" value=".jsp" />

</bean>
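For completeness, the HelloController referenced in the mapping above could look like the following sketch, assuming the classic Controller interface of Spring MVC 2.x and an injected HelloBean (both the bean class and the view name are illustrative):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.Controller;

public class HelloController implements Controller {

    private HelloBean helloBean;    // injected by Spring (see the bean definition above)

    public void setHelloBean(HelloBean helloBean) {
        this.helloBean = helloBean;
    }

    public ModelAndView handleRequest(HttpServletRequest request,
                                      HttpServletResponse response) throws Exception {
        // "hello" resolves to /WEB-INF/jsp/hello.jsp through the view resolver above
        return new ModelAndView("hello", "greeting", helloBean.sayHello());
    }
}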

In addition, the Spring Framework provides integration with other web frameworks, such as:

Apache Struts [78]. Open-source web framework widely adopted.

WebWork [79]. Open-source web framework focused on productivity and simplicity.

Apache Tapestry [80]. Open-source web framework (dynamic, scalable).

And also provides integration with existing view technologies:

Apache Tiles [70]. Templating framework built to simplify the development of web applications.

Apache POI [74]. Java API to access Microsoft format files.

Apache Velocity [71]. Java-based template engine that provides a template language to reference objects defined in Java code.

FreeMarker [72]. Another open-source template engine.

XSLT. XSL Transformations.

iText [75]. Library for generating PDF in Java.

JasperReports [76]. Open-source reporting engine.

15.2.2.2 Web Services Tier in Java EE: JAX-WS, JAXB, StAX, SAAJ

15.2.2.2.1 Integration with Spring

JAX-WS 2.0

Spring 2.5 fully supports JAX-WS 2.0/2.1. To expose servlet-based web services using JAX-WS in the Spring Framework, you extend Spring's SpringBeanAutowiringSupport class and implement the business logic there, usually delegating the call to the business layer. Spring 2.5's @Autowired annotation expresses dependencies on Spring-managed beans.

import org.springframework.web.context.support.SpringBeanAutowiringSupport;

@WebService(serviceName = "AccountService")
public class AccountServiceEndpoint extends SpringBeanAutowiringSupport {

    @Autowired
    private AccountService biz;

    @WebMethod
    public void insertAccount(Account acc) {
        biz.insertAccount(acc);
    }

    @WebMethod
    public Account[] getAccounts(String name) {
        return biz.getAccounts(name);
    }
}

For accessing web services using JAX-WS, Spring provides two factory beans to create JAX-WS web service proxies, namely LocalJaxWsServiceFactoryBean and JaxWsPortProxyFactoryBean. The following example uses the latter to create a proxy for the AccountService endpoint:

<bean id="accountWebService"

class="org.springframework.remoting.jaxws.JaxWsPortProxyFactoryBean">

<property name="serviceInterface" value="example.AccountService"/>

<property name="wsdlDocumentUrl"

value="http://localhost:8080/account/services/accountService?WSDL"/>

<property name="namespaceUri"

value="http://localhost:8080/account/services/accountService"/>

<property name="serviceName" value="AccountService"/>

<property name="portName" value="AccountPort"/>

</bean>
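On the client side, the proxy defined above can then be used as a plain Java interface. A hypothetical usage sketch, assuming an ApplicationContext named context:

AccountService service = (AccountService) context.getBean("accountWebService");
Account[] accounts = service.getAccounts("Alice");

The remote SOAP call is completely hidden behind the serviceInterface.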

JAXB 2.0

Spring-OXM (Object/XML Mapping) is a subproject of Spring-WS that provides an abstraction layer over several popular OXM solutions (JAXB, Castor XML, and so on). The central pieces in Spring-OXM are the org.springframework.oxm.Marshaller and org.springframework.oxm.Unmarshaller interfaces: marshallers generate XML from Java objects, and unmarshallers construct Java objects from XML. Spring-OXM comes with several implementations of both. Spring's org.springframework.oxm.jaxb.Jaxb2Marshaller class implements both Marshaller and Unmarshaller. Example:

<beans>
    <bean id="jaxb2Marshaller"
          class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>org.springframework.oxm.jaxb.Flight</value>
                <value>org.springframework.oxm.jaxb.Flights</value>
            </list>
        </property>
        <property name="schema"
                  value="classpath:org/springframework/oxm/schema.xsd"/>
    </bean>
</beans>
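In application code, the configured bean is used through the generic Marshaller interface; a minimal sketch, assuming the JAXB-bound Flight class from the configuration above (the helper class and output handling are illustrative):

import java.io.StringWriter;
import javax.xml.transform.stream.StreamResult;
import org.springframework.oxm.Marshaller;

public class FlightWriter {

    private Marshaller marshaller;  // wired to the jaxb2Marshaller bean above

    public void setMarshaller(Marshaller marshaller) {
        this.marshaller = marshaller;
    }

    public String toXml(Object flight) throws Exception {
        StringWriter out = new StringWriter();
        // Generates XML from the Java object graph into the given Result.
        marshaller.marshal(flight, new StreamResult(out));
        return out.toString();
    }
}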

SAAJ 1.3

Spring-WS provides a factory to create an empty message or to read a message from an input stream: the WebServiceMessageFactory. There are two concrete implementations, one based on SAAJ and the other on Axis2's AXIOM. The SAAJ implementation is called SaajSoapMessageFactory.

WebServiceTemplate is the central class in the Spring-WS client API. Sending messages to a web service involves producing SOAP envelopes and communication boilerplate code that is much the same for every web service client. In a Spring-WS client, WebServiceTemplate handles this grunt work:

<bean id="webServiceTemplate"

class="org.springframework.ws.client.core.WebServiceTemplate">

<property name="messageFactory">

<bean class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/>

</property>

<property name="messageSender" ref="messageSender"/>

</bean>
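A minimal client sketch using the template above; the payload, endpoint URI, and output target are illustrative:

import javax.xml.transform.stream.StreamResult;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.xml.transform.StringSource;

public class EchoClient {

    private WebServiceTemplate webServiceTemplate;  // the bean defined above

    public void setWebServiceTemplate(WebServiceTemplate webServiceTemplate) {
        this.webServiceTemplate = webServiceTemplate;
    }

    public void echo() {
        StringSource request = new StringSource("<echoRequest>Hello</echoRequest>");
        StreamResult response = new StreamResult(System.out);
        // The template wraps the payload in a SOAP envelope, sends it,
        // and writes the response payload to the given Result.
        webServiceTemplate.sendSourceAndReceiveToResult(
                "http://localhost:8080/echo/services", request, response);
    }
}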

StAX 1.0

Spring-WS supports several XML APIs: DOM, SAX, StAX, JDOM, dom4j, and XOM. It defines several abstract classes from which message endpoints can be created. Spring's AbstractStaxEventPayloadEndpoint class handles message payloads using event-based StAX, while AbstractStaxStreamPayloadEndpoint handles message payloads using streaming StAX.
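For readers unfamiliar with the two StAX styles, the following sketch uses the standard javax.xml.stream API directly (independently of Spring-WS) to show what streaming, cursor-style processing looks like; the XML input is illustrative:

import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxStreamingDemo {

    public static void main(String[] args) throws Exception {
        String xml = "<flights><flight>KL1234</flight></flights>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        // Streaming (cursor) style: the application pulls events one by one.
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                System.out.println("element: " + reader.getLocalName());
            }
        }
    }
}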

15.2.2.2.2 Alternatives

Alternatives to JAX-WS 2.0

XFire allows exporting Spring-managed beans as web services through built-in Spring support. XFire is a lightweight SOAP library hosted by Codehaus. To integrate XFire in a Spring application, the following configuration is needed in web.xml:

<servlet>
    <servlet-name>xfire</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
</servlet>

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:org/codehaus/xfire/spring/xfire.xml</param-value>
</context-param>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

In addition, a configuration file named xfire-servlet.xml is necessary:

<beans>
    <bean name="/Echo"
          class="org.codehaus.xfire.spring.remoting.XFireExporter">
        <property name="serviceInterface" value="org.codehaus.xfire.spring.Echo"/>
        <property name="serviceBean">
            <bean class="org.codehaus.xfire.spring.EchoImpl"/>
        </property>
        <!-- the XFire bean is defined in the xfire.xml file -->
        <property name="xfire" ref="xfire"/>
    </bean>
</beans>

Alternatives to JAXB 2.0

As said before, Spring-OXM provides integration with the most popular OXM solutions. Apart from JAXB 2, there are other alternatives:

Castor XML [92]: org.springframework.oxm.castor.CastorMarshaller

JAXB v1: org.springframework.oxm.jaxb.Jaxb1Marshaller

JiBX [94]: org.springframework.oxm.jibx.JibxMarshaller

XMLBeans [93]: org.springframework.oxm.xmlbeans.XmlBeansMarshaller

XStream [95]: org.springframework.oxm.xstream.XStreamMarshaller

Alternatives to SAAJ 1.3

The AxiomSoapMessageFactory uses the Axis2 Object Model (AXIOM) [96] to create SoapMessage implementations:

<bean id="messageFactory"
      class="org.springframework.ws.soap.axiom.AxiomSoapMessageFactory">
    <property name="payloadCaching" value="true"/>
</bean>

Alternatives to StAX 1.0

The alternatives to StAX proposed by Spring are the following:

DOM: AbstractDomPayloadEndpoint handles message payloads as DOM elements.

SAX [97]: AbstractSaxPayloadEndpoint handles message payloads through SAX.

JDOM [98]: AbstractJDomPayloadEndpoint handles message payloads as JDOM elements.

dom4j [99]: AbstractDom4jPayloadEndpoint handles message payloads as dom4j elements.

XOM [100]: AbstractXomPayloadEndpoint handles message payloads as XOM elements.

15.2.2.3 Enterprise in Java EE: EJB

15.2.2.3.1 Integration with Spring

As a lightweight container, Spring is often considered an EJB replacement. However, it is important to note that using Spring does not prevent you from using EJBs. In fact, Spring makes it much easier to access EJBs and to implement functionality within them.

Spring provides support for the EJB 2.x specification but does not provide any direct support for the EJB 3 specification. However, there is a Spring add-on, Spring Pitchfork, that makes it possible to use EJB 3 annotations to perform dependency injection and AOP in Spring.

15.2.2.3.2 Alternatives

In short, the EJB 3 programming model is a POJO-based model. Pitchfork is an add-on for Spring that supports the following EJB 3 annotations:

@ApplicationException: Declares an exception to be an application exception, which, by default, does not roll back a transaction.

@AroundInvoke: Declares a method to be an interceptor method.

@EJB: Declares a dependency on an EJB.

@ExcludeClassInterceptors: Declares that a method should not be intercepted by a class interceptor.

@ExcludeDefaultInterceptors: Declares that a method should not be intercepted by a default interceptor.

@Interceptors: Specifies one or more interceptor classes to associate with a bean class or method.

@PostConstruct: Specifies a method to be executed after a bean is constructed and all dependency injection is done, in order to perform initialization.

@PreDestroy: Specifies a method to be executed prior to the bean being removed from the container.

@Resource: Declares a dependency on an external resource.

@Stateless: Declares a bean to be a stateless session bean.

@TransactionAttribute: Specifies that a method should be invoked within a transaction context.

Pitchfork represents a choice for Spring developers. A developer can either use conventional Spring dependency injection and AOP, or use EJB 3 annotations for dependency injection and AOP with Pitchfork.
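A minimal sketch of a session bean written against the annotations listed above; the bean and its collaborators are illustrative, and under Pitchfork the annotations are processed by Spring rather than by an EJB container:

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class AccountBean {

    // Declares a dependency on an external resource (here a data source).
    @Resource
    private DataSource dataSource;

    // Executed after construction and dependency injection are complete.
    @PostConstruct
    public void init() {
        // e.g. validate that the data source is reachable
    }

    public void insertAccount(Account acc) {
        // persistence logic using dataSource
    }
}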

15.2.2.4 Persistence in Java EE: JPA

15.2.2.4.1 Integration with Spring

The central element of the Spring-JPA integration is a template class. JpaTemplate, specifically, is a template class that wraps a JPA EntityManager. The following XML configures a JPA template in Spring:

<bean id="jpaTemplate" class="org.springframework.orm.jpa.JpaTemplate">

<property name="entityManagerFactory" ref="entityManagerFactory" />

</bean>

The JPA specification defines two kinds of entity managers:


Application-managed: entity managers are created when an application directly requests an entity manager from an entity manager factory.

Container-managed: entity managers are created and managed by a Java EE container.

Each entity manager factory is produced by a corresponding Spring factory bean:

LocalEntityManagerFactoryBean produces an application-managed EntityManagerFactory.

LocalContainerEntityManagerFactoryBean produces a container-managed EntityManagerFactory.
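For context, the standard JPA bootstrap that underlies the application-managed case looks as follows (a plain JPA sketch; the persistence unit name is illustrative):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ApplicationManagedJpaDemo {

    public static void main(String[] args) {
        // Application-managed: the application requests the entity manager
        // directly from the factory and is responsible for closing it.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("accountUnit");
        EntityManager em = emf.createEntityManager();
        try {
            // ... work with the entity manager ...
        } finally {
            em.close();
            emf.close();
        }
    }
}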

15.2.2.4.2 Alternatives

For data access, Spring uses a powerful design pattern: the Template Method pattern. In the previous section we saw that the template for JPA is org.springframework.orm.jpa.JpaTemplate. In addition, Spring provides integration with other ORM technologies through further templates:

Hibernate: org.springframework.orm.hibernate3.HibernateTemplate

Java Data Objects (JDO): org.springframework.orm.jdo.JdoTemplate

Oracle Toplink: org.springframework.orm.toplink.TopLinkTemplate

iBatis SQL Maps: org.springframework.orm.ibatis.SqlMapClientTemplate

15.2.2.5 Security in Java EE: JAAS

15.2.2.5.1 Integration with Spring

Spring Security provides a package able to delegate authentication requests to the Java Authentication and Authorization Service (JAAS).

Spring's JaasAuthenticationProvider class attempts to authenticate a user's principal and credentials through JAAS:

<bean id="jaasAuthenticationProvider"

class="org.springframework.security.providers.jaas.JaasAuthenticationProvider

">

<property name="loginConfig" value="/WEB-INF/login.conf"/>

<property name="loginContextName" value="JAASTest"/>

<property name="callbackHandlers">

<list>

<bean

class="org.springframework.security.providers.jaas.JaasNameCallbackHandler"/>

<bean

class="org.springframework.security.providers.jaas.JaasPasswordCallbackHandle

r"/>

</list>

</property>

<property name="authorityGranters">

<list>

<bean

class="org.springframework.security.providers.jaas.TestAuthorityGranter"/>

</list>

</property>

</bean>


The JAAS package for Spring Security provides two default callback handlers, JaasNameCallbackHandler and JaasPasswordCallbackHandler. Each of these callback handlers implements JaasAuthenticationCallbackHandler.

15.2.2.5.2 Alternatives

In Spring, the authentication manager is responsible for determining who you are. Spring's ProviderManager is an authentication manager implementation that delegates responsibility for authentication to one or more authentication providers.

Figure 47. Spring's authentication managers

Apart from JAAS, Spring Security currently supports authentication integration with all of these technologies:

HTTP BASIC authentication headers (an IETF RFC-based standard).

HTTP Digest authentication headers (an IETF RFC-based standard).

HTTP X.509 client certificate exchange (an IETF RFC-based standard).

LDAP.

Form-based authentication (for simple user interface needs).

OpenID authentication.

Computer Associates Siteminder.

JA-SIG CAS (Central Authentication Service).

Transparent authentication context propagation for RMI and Spring HTTPInvoker.

Automatic "remember-me" authentication.

Anonymous authentication.

Run-as authentication.

Container integration with JBoss, Jetty, Resin and Tomcat.


15.2.2.6 Remoting in Java EE: JMS

15.2.2.6.1 Integration with Spring

JMS provides Java applications with the option of communicating asynchronously. When messages are sent, the client does not have to wait for the service to process the message or even for the message to be delivered.

There are two main concepts in JMS: message brokers and destinations. When an application sends a message, it hands it off to a message broker. A message broker is JMS's answer to the post office: it ensures that the message is delivered to the specified destination, leaving the sender free to go about other business.

JmsTemplate is Spring's answer for managing JMS. It takes care of creating a connection, obtaining a session, and the actual sending and receiving of messages. In addition, JmsTemplate handles any clumsy JMSException. The declaration is as follows:

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">

<property name="connectionFactory" ref="connectionFactory" />

</bean>
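A sending sketch built on the template above; the queue name and message text are illustrative. MessageCreator is the callback through which JmsTemplate lets the caller build the JMS message:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;

public class AccountNotifier {

    private JmsTemplate jmsTemplate;  // the bean defined above

    public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void notifyAccountCreated(final String accountName) {
        // JmsTemplate opens the connection and session, sends the message,
        // and translates any JMSException into an unchecked exception.
        jmsTemplate.send("account.queue", new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage("created: " + accountName);
            }
        });
    }
}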

15.2.2.6.2 Alternatives

Spring provides several choices for communicating asynchronously:

Lingo [102]. A lightweight POJO-based remoting and messaging library built on Spring Remoting, which it extends to support JMS.

Apache ActiveMQ [101]. An open-source message broker that fully implements JMS 1.1.

In addition to JMS, Spring provides synchronous remoting capabilities with RMI, Hessian [57], Burlap [58], and Spring's HTTP invoker.

15.2.2.7 Transactions in Java EE: JTA

15.2.2.7.1 Integration with Spring

Spring does not directly manage transactions. Instead, it comes with a selection of transaction managers that delegate responsibility for transaction management to a platform-specific transaction implementation provided by either JTA or the persistence mechanism.


Figure 48. Transaction managers in Spring

JtaTransactionManager is the Spring transaction manager for JTA. It delegates transaction management responsibility to a JTA implementation:

<bean id="transactionManager"

class="org.springframework.transaction.jta.JtaTransactionManager">

<property name="transactionManagerName" value="java:/TransactionManager" />

</bean>
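With a transaction manager in place, services can be demarcated declaratively; a sketch assuming annotation-driven transaction support is enabled in the Spring configuration (the service class and method are illustrative):

import org.springframework.transaction.annotation.Transactional;

public class TransferService {

    // Runs inside a transaction controlled by the JtaTransactionManager
    // configured above; a runtime exception triggers a rollback.
    @Transactional
    public void transfer(Account from, Account to, double amount) {
        // debit 'from' and credit 'to' through the persistence layer
    }
}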

15.2.2.7.2 Alternatives

The other transaction managers supported by Spring are the following:

JDBC: org.springframework.jdbc.datasource.DataSourceTransactionManager

Hibernate: org.springframework.orm.hibernate.HibernateTransactionManager

JDO: org.springframework.orm.jdo.JdoTransactionManager

JPA: org.springframework.orm.jpa.JpaTransactionManager

15.2.2.8 Resource Locating in Java EE: JNDI

15.2.2.8.1 Integration with Spring

Spring's JndiObjectFactoryBean provides a bridge between JNDI and DI: it wires up an object retrieved from JNDI so that it becomes a bean inside the Spring IoC container.

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">

<property name="jndiName" value="jdbc/RantzDatasource" />

</bean>
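The JNDI-backed bean can then be injected like any locally defined bean; a minimal sketch (the DAO class is illustrative):

import javax.sql.DataSource;

public class JdbcAccountDao {

    private DataSource dataSource;

    // Receives the DataSource retrieved from JNDI by the
    // JndiObjectFactoryBean above; the DAO itself is unaware of JNDI.
    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}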

15.2.2.8.2 Alternatives


In Spring, every bean inside the IoC container has a name (id). This name is used for wiring beans. For this reason, a technology similar to JNDI is not necessary in the Spring context.

15.2.2.9 EIS Integration in Java EE: JCA

15.2.2.9.1 Integration with Spring

Spring provides classes to access a JCA CCI connector. In order to connect to the EIS, a ConnectionFactory is needed, obtained either from the application server (managed mode) or from Spring (non-managed mode).

<bean id="eciManagedConnectionFactory"

class="com.ibm.connector2.cics.ECIManagedConnectionFactory">

<property name="serverName" value="TXSERIES"/>

<property name="connectionURL" value="tcp://localhost/"/>

<property name="portNumber" value="2006"/>

</bean>

<bean id="eciConnectionFactory"

class="org.springframework.jca.support.LocalConnectionFactoryBean">

<property name="managedConnectionFactory"

ref="eciManagedConnectionFactory"/>

</bean>

15.2.2.9.2 Alternatives

Spring Integration provides a wide variety of configuration options, including annotations, XML, and so on. This Spring module can be considered an Enterprise Application Integration (EAI) solution.

In Spring Integration, a Message is a generic wrapper for any Java object, combined with metadata used by the framework while handling that object. The conversion of objects to Messages is performed using Spring's MessageCreator. Spring Integration offers various adapters: JMS, RMI, HttpInvoker, File, FTP, Mail, Web Service, Stream, and ApplicationEvent.

15.2.3 Success Indicators

Now we are able to fill in the proposed table with the results:

Java EE | Spring integration | Spring alternatives

Web: Servlet, JSP, JSTL, JSF | Yes | Spring MVC, Portlet MVC, Spring Web Flow, Struts, WebWork, Tapestry, Tiles, Velocity, XSLT, FreeMarker, POI, iText, JasperReports

Web Services: JAX-WS, JAXB, SAAJ, StAX | Yes | XFire, Castor XML, JAXB, JiBX, XMLBeans, XStream, Axis2 OM, DOM, SAX, JDOM, dom4j, XOM

Enterprise: EJB | No (only EJB 2.x) | Pitchfork

Persistence: JPA | Yes | Hibernate, TopLink, JDO, iBatis SQL Maps

Security: JAAS | Yes | BASIC, Digest, X.509, LDAP, form-based, OpenID, Computer Associates Siteminder, JA-SIG CAS, transparent authentication, remember-me, run-as, anonymous, container integration

Remoting: JMS | Yes | Lingo, ActiveMQ, HTTP invoker, RMI, Hessian, Burlap

Transactions: JTA | Yes | Hibernate, JDO, JPA, JDBC

Resource Locating: JNDI | Yes | IoC container

EIS Integration: JCA | Yes (JCA CCI) | Spring Integration

This table is a brief summary that can be useful for Java EE and Spring developers who need to know how to create Spring applications or how to refactor existing Java EE systems to Spring.

The best way to learn this kind of technology is to watch it in action. The simplest example for any new technology is usually the "Hello World" application. For this reason, in this case study we have developed a Hello World sample for the most important Spring integrations and alternatives proposed in the table above.

15.3 Conclusion

15.3.1 Summary

Java enterprise development is a huge and varied domain in software engineering. The de facto standard for years has been Java Enterprise Edition (Java EE or JEE), so there are many JEE-based systems and developers. Currently there are alternatives to JEE for enterprise development: many open-source technologies and frameworks exist for every aspect of this kind of software system. One of the most successful alternatives is the Spring Framework.

The Spring Framework is a Java/JEE application framework for developing Java systems. Spring is based on principles like inversion of control (IoC) and aspect-oriented programming (AOP). It also provides easy integration with the main existing technologies in enterprise development (not reinventing the wheel).

This case study shows how the Spring Framework can be a real alternative to Java EE for enterprise development. We have analyzed both technologies in detail. For every specific aspect of Java EE we have looked for the Spring equivalent. There is always a two-fold solution when trying to do in Spring the same as with a Java EE approach:

Integrate the specific Java EE technology under Spring.

Use an alternative technology under Spring.

We have selected the most important aspects of Java EE development, i.e., web, web services, enterprise beans, persistence, security, remoting, transactions, resource locating, and EIS integration. For every single aspect, there is always a way to integrate the technology under Spring, or some alternative (usually several).

This case study thus presents a complete approach to Spring enterprise development as an alternative to Java EE. The results of this study can be used by enterprise developers to learn how to develop Spring-based systems.

15.3.2 Lessons Learned

This study has shown some important lessons in the domain of enterprise development:

Java EE has been the de facto standard in enterprise development, but nowadays there are real alternatives.


The Spring Framework is the main alternative to Java EE: it is an advanced open-source framework for Java/JEE development.

The Spring Framework's strong points are the following:

o Low coupling: IoC and AOP.

o Freedom: easy integration with existing technologies.

There are two possibilities for developing Spring systems while reusing Java EE principles:

o Integration: reuse the specific Java EE technology under Spring.

o Spring alternative: choose one alternative and use it under Spring.

15.3.3 Final Recommendations

Knowledge is the most important factor in making the right decisions. The enterprise development domain is a huge and complex world, because there are many technologies and solutions available. For this reason, it is becoming harder and harder to know which technology will be the best in each situation. Enterprise developers must renew their knowledge continuously.

When an enterprise software development team has to choose a specific technology, they always have to think about the alternatives, their pros and cons, and so on. This study has shown that Spring can be a real alternative to Java EE. Still, there is another thing to keep in mind: Java EE remains the main platform in Java enterprise development, and it may be the best choice depending on the system to be developed. The final advice for choosing between Spring and Java EE is to keep the following in mind:

Spring is a flexible platform, especially oriented towards ease of development and integration of open-source technologies.

Java EE is perhaps preferable for large enterprise systems: the EJB container in commercial application servers allows great scalability.


16 Glossary

C#: One of .NET’s programming languages, roughly a synthesis of Java and C++

COM: Microsoft’s Component Object Model

CT: Computed Tomography, a medical imaging method employing tomography on a large series of X-RAY images.

FSE: Field Service Engineer, the person that services a system in the hospitals.

FSF: Field Service Framework, an application framework used for performing service to the system. More functionality is made available by developing plug-ins.

GUI: Graphical User Interface

IDE: Integrated Development environment, e.g. Visual Studio or Eclipse

IP: Image Processing; algorithm that is able to improve the quality of an image (e.g. by removing noise) or to extract valuable (diagnostic) information from an image.

IQ: Image Quality: Subjective measure of the quality of an image. Quality might refer to appreciation or to task-oriented quality.

J#: One of .NET's languages, almost the same as the Java language.

Java: Sun's Java platform, including the Java language and libraries.

MFC: Microsoft Foundation Classes, a library that wraps portions of the Windows API in C++ classes (see http://en.wikipedia.org/wiki/Microsoft_Foundation_Class_Library), including functionality that enables applications to use a default application framework.

MIP: The Medical Imaging Platform for all imaging systems within Philips Healthcare

MRI: Magnetic Resonance Imaging, a medical imaging method to visualize the structure of the body.

.NET: Microsoft's platform to build applications on Windows-based PCs, including several programming languages and libraries.

STT: System Test and Tool, a suite of test procedures and tools used for diagnosing the MRI system.

VT: Virtual Terminal, allows host terminals on a multi-user network to interact with other hosts regardless of terminal type and characteristics (see http://en.wikipedia.org/wiki/Virtual_terminal). The Virtual Terminal Digital 220 (VT-220) is a terminal manufactured by Digital Equipment Corporation.


Platform: Set of (software) subsystems and interfaces that form a common structure from which a set of derivative products can be efficiently developed and produced.

Refactoring: The process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure.

Reference Architecture: Captures the high level design and the main guiding development principles of a software product line. The principles are the solution for one or more concerns dealing with quality. There are other, more instrumental, definitions in literature.

Reference Model: High-level readable implementation of an IP algorithm, with no concessions on the accuracy of the calculations that can be used as an executable specification.

Service: A service is seen as a function that is well-defined, self-contained and does not depend on the context or state of other services.


17 References

[1] A metrics suite for object oriented design, Chidamber S.R, Kemerer C.F. MIT, Cambridge, MA, June 1994, Available at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=295895

[2] Code and Design Metrics for Object-Oriented Systems, Lindroos Jaana, University of Helsinki, December 2004, Available at: http://www.cs.helsinki.fi/u/paakki/Lindroos.pdf

[3] Columbus/CAN: Columbus website, Available at: http://www.frontendart.com/products_col.php

[4] FrontEndART Monitor website, Available at: http://www.frontendart.com/Monitor-1.0-UsersGuide.pdf

[5] Serious D1.1 – State of The Art Report, 2006

[6] Bart van Rompaey & Matthias Rieger (2008). The Refactoring Handbook. ITEA SERIOUS deliverable D1.3.

[7] Rob Albers, Marcel Boosten & Peter H.N. de With (2008). Options for New Real-time Image-processing Architectures in Cardio-Vascular Systems, Proceedings of the SPIE.

[8] Bas Buunen, Claudio Riva & Gerard Schouten (2008). SERIOUS Case Studies. ITEA SERIOUS deliverable D1.4.

[9] Gerard Schouten, Tineke de Bunje & Rob van Ommering (2006). Variability of the Philips Medical Workspot. Philips Software Conference.

[10] Erik Oerlemans & Andre Postma (2008). CXA2007 Reference Architecture. Philips report (internal deliverable), forthcoming.

[11] Bart van Rompaey & Matthias Rieger (2008). The Refactoring Handbook (draft version). ITEA SERIOUS deliverable D1.3.

[12] I. Hammouda and K. Koskimies, Concern-Based Mining of Heterogeneous Software Repositories, In Proc.of the ICSE workshop on Mining Software Repositories, 2006, pp. 80-86.

[13] Integrated Architecting Environment INARI, practise.cs.tut.fi, Tampere University of Technology, Practise Research Group on Software Engineering, 2007.

[14] T. Reinikainen, Concern manipulation toolset, MSc thesis, Tampere University of Technology, 2007.

[15] T. Reinikainen, I. Hammouda, K. Koskimies and T. Systä, Software Comprehension through Concern-based Queries, ICPC, 2007, a short paper.

[16] C. Riva, P. Selonen, T. Systä, A.-P. Tuovinen, J. Xu, and Y. Yang, Establishing a Software Architecting Environment, In The 4th Working IEEE/IFIP Conference on Software Architecture (WICSA 2004), 2004.

[17] The Eclipse Foundation, http://www.eclipse.org/, 2006.

[18] "Towards a Framework for Program Understanding", Scott R. Tilley, Dennis B. Smith, Santanu Paul. 4th International Workshop on Program Comprehension (WPC '96), p. 19, 1996.

[19] "Symphony: View-Driven Software Architecture Reconstruction", van Deursen, A., Hofmeister, C., Koschke, R., Moonen, L. & Riva, C. Pages 122-132 of: Proc. of the 4th Working IEEE/IFIP Conference on Software Architecture (WICSA 2004), 12-15 June 2004, Oslo, Norway. IEEE Computer Society, 2004.

[20] "Principles of Software Engineering and Design". Zelkowitz, M., Shaw, A. & Gannon, J. Prentice-Hall, 1979.

[21] "Software Architecture Reconstruction", Krikhaar, R. Ph.D. Thesis, University of Amsterdam, 1999.

[22] "Contribution to Quality-driven Evolutionary Software Development Process for Service-Oriented Architecture", Arciniegas, J.L. Ph.D. Thesis, Polytechnic University of Madrid, 2006.

[23] "Moose: an Extensible Language-Independent Environment for Reengineering Object-Oriented Systems", Ducasse, S., Lanza, M. & Tichelaar, S. Proceedings of CoSET '00 (2nd International Symposium on Constructing Software Engineering Tools), June 2000.

[24] "Architecture Reconstruction Guidelines, 2nd Edition", Kazman, R., O'Brien, L. and Verhoef, C. CMU/SEI-2002-TR-034, 2002.

[25] "Polymetric Views - A Lightweight Visual Approach to Reverse Engineering", Lanza, M. and Ducasse, S. IEEE Trans. Softw. Eng., 29(9), 782-795, 2003.

[26] Jude (Java and UML Developer Environment), a Java UML modeling tool. http://jude.change-vision.com

[27] Omondo Eclipse UML Studio, an Eclipse plug-in for UML modelling. http://www.omondo.com

[28] Mehregani, A. & Mehregani, D., "Gnireenigne Esrever Fo Tra Enif Eft, or the Fine Art of Reverse Engineering", EclipseReview Magazine, Spring 2006 issue. Available at http://www.eclipsereview.org

[29] Eclipse TPTP (Test and Performance Tools Project), an Eclipse top-level project. http://www.eclipse.org/tptp

[30] "Service-Oriented Architecture: Concepts, Technology, and Design". Erl, T. Upper Saddle River: Prentice Hall, 2005.

[31] "About the OSGi platform", The OSGi Alliance. Technical Whitepaper, 2005.

[32] "Eclipse Rich Client Platform: Designing, Coding, and Packaging Java Applications". McAffer, J., Lemieux, J.M. Addison Wesley Professional, 2005.

[33] "Listeners Considered Harmful: The Whiteboard Pattern". The OSGi Alliance. Technical Whitepaper, 2004.

[34] Java Enterprise Edition. Sun Microsystems. 2008. http://java.sun.com/javaee/

[35] Spring Framework. SpringSource. 2008. http://www.springframework.org/

[36] SpringSource. 2008. http://www.springsource.com/

[37] Spring in Action, 2nd edition. Craig Walls. Manning Publications. ISBN 1-933988-13-4. August 2007.

[38] Spring Framework 2.5 reference. Rod Johnson, Juergen Hoeller e.a. 2008. http://static.springframework.org/spring/docs/2.5.x/reference/index.html

[39] The Java EE 5 Tutorial. Eric Jendrock, Jennifer Ball e.a. September 2007. http://java.sun.com/javaee/5/docs/tutorial/doc/

[40] Expert One-on-One J2EE Design and Development. Rod Johnson. Wrox Ed. ISBN: 978-0-7645-4385-2. October 2002.

[41] Apache License, Version 2.0. Apache Software Foundation. January 2004. http://www.apache.org/licenses/LICENSE-2.0.html

[42] Spring projects. 2008. http://www.springframework.org/projects

[43] Spring Web Flow. Keith Donald (SpringSource), Erwin Vervaet (Ervacon). 2008. http://www.springframework.org/webflow

[44] Spring Web Services. 2008. http://static.springframework.org/spring-ws/site/

[45] Spring Security (Acegi Security). 2008. http://acegisecurity.org/

[46] Spring Dynamic Modules for OSGi. 2008. http://www.springframework.org/osgi

[47] Spring Batch. 2008. http://static.springframework.org/spring-batch/

[48] Spring Integration. 2008. http://www.springframework.org/spring-integration

[49] Spring LDAP. 2008. http://www.springframework.org/ldap

[50] Spring IDE. 2008. http://springide.org/

[51] Spring Modules. 2008. https://springmodules.dev.java.net/

[52] Spring Java Configuration. 2008. http://www.springframework.org/javaconfig

[53] Spring Rich Client Project. 2008. http://www.springframework.org/spring-rcp

[54] Spring .NET. 2008. http://www.springframework.net/

[55] Spring BeanDoc. 2008. http://spring-beandoc.sourceforge.net/


[56] Enterprise Integration Patterns. 2008. http://www.eaipatterns.com/

[57] Hessian binary web service protocol. 2008. http://hessian.caucho.com/

[58] Burlap. http://www.caucho.com/resin-3.0/protocols/burlap.xtp

[59] JAX-RPC. Java API for XML-based RPC. 2008. https://jax-rpc.dev.java.net/

[60] JAX-WS. Java API for XML Web Services. 2008. https://jax-ws.dev.java.net/

[61] JMS. Java Message Service. 2008. http://java.sun.com/products/jms/

[62] JMX. Java Management Extensions. 2008. http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/

[63] JCA. J2EE Connector Architecture. 2008. http://java.sun.com/j2ee/connector/

[64] JavaMail. 2008. http://java.sun.com/products/javamail/

[65] JAF. Java Activation Framework. 2008. http://java.sun.com/javase/technologies/desktop/javabeans/jaf/index.jsp

[66] Quartz. Open source job scheduling system for J2EE/J2SE systems. http://www.opensymphony.com/quartz/

[67] XDoclet. Attribute-Oriented Programming. 2008. http://xdoclet.sourceforge.net/

[68] Jakarta Commons Attributes. 2008. http://commons.apache.org/attributes/

[69] JTA. Java Transaction API. 2008. http://java.sun.com/products/jta/

[70] Apache Tiles. 2008. http://tiles.apache.org/

[71] Apache Velocity. 2008. http://velocity.apache.org/

[72] FreeMarker. 2008. http://www.freemarker.org/

[73] XSLT. 2008. http://www.w3.org/TR/xslt

[74] Apache POI. 2008. http://poi.apache.org/

[75] iText. 2008. http://www.lowagie.com/iText/

[76] JasperReports. 2008. http://www.jasperforge.org/jaspersoft/opensource/business_intelligence/jasperreports/

[77] JavaServer Faces. 2008. http://java.sun.com/javaee/javaserverfaces/

[78] Apache Struts. 2008. http://struts.apache.org/

[79] WebWork. 2008. http://www.opensymphony.com/webwork/

[80] Apache Tapestry. 2008. http://tapestry.apache.org/


[81] Spring in Action, 2nd edition. Craig Walls. Manning Publications. Web Extras. http://www.manning.com/walls3/WebXtras.pdf

[82] Java. Sun Microsystems. 2008. http://java.sun.com/

[83] Java SE. Sun Microsystems. 2008. http://java.sun.com/javase/

[84] Java EE. Sun Microsystems. 2008. http://java.sun.com/javaee/

[85] Java ME. Sun Microsystems. 2008. http://java.sun.com/javame/

[86] Java Servlets. Sun Microsystems. 2008. http://java.sun.com/products/servlet/

[87] Java Server Pages. Sun Microsystems. 2008. http://java.sun.com/products/jsp/

[88] Java Server Pages Standard Tags Library. Sun Microsystems 2008. http://java.sun.com/products/jsp/jstl/

[89] Pitchfork. SpringSource and Bea Systems. 2008. http://www.springsource.com/web/guest/pitchfork

[90] Java Transaction Service (JTS). Sun Microsystems. 2008. http://java.sun.com/javaee/technologies/jts/index.jsp

[91] JavaMail. Sun Microsystems. 2008. http://java.sun.com/products/javamail/

[92] Castor XML. 2008. http://www.castor.org/xml-framework.html

[93] Apache XMLBeans. 2008. http://xmlbeans.apache.org/

[94] JiBX. 2008. http://jibx.sourceforge.net/

[95] XStream. 2008. http://xstream.codehaus.org/

[96] Apache AXIOM. 2008. http://ws.apache.org/axis2/1_0/OMTutorial.html

[97] SAX. 2008. http://www.saxproject.org/

[98] JDOM. 2008. http://www.jdom.org/

[99] DOM4J. 2008. http://www.dom4j.org/

[100] XOM. 2008. http://www.xom.nu/

[101] Apache ActiveMQ. 2008. http://activemq.apache.org/

[102] Lingo. 2008. http://lingo.codehaus.org/

[103] Cem Kaner and Walter P. Bond. Software engineering metrics: What do they measure and how do we know? In Proceedings of 10th International Software Metrics Symposium METRICS 2004, 2004.

[104] T.J. McCabe, "A Complexity Measure," IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, October 1976, pp. 243 - 245.


[105] M.H. Halstead, Elements of Software Science, Elsevier, North-Holland, New York, 1977.

[106] S. Chidamber and C. Kemerer, "A Metrics Suite for Object-Oriented Design", IEEE Trans. Software Eng., vol. 20, no. 6, pp. 476-493, June 1994.

[107] Robert C. Martin, Object Oriented Design Quality Metrics an Analysis of Dependencies, http://www.objectmentor.com/resources/articles/oodmetrc.pdf, 2004-06-04.

[108] Brito e Abreu, F. and Carapuça, R. "Object-Oriented Software Engineering: Measuring and Controlling the Development Process". 4th Int. Conference on Software Quality, McLean, VA, USA, 1994.

[109] Bieman, James M. & Kang, Byung-Kyoo. Cohesion and reuse in an object-oriented system. Proceedings of the 1995 Symposium on Software Reusability, pages 259-262. ISSN: 0163-5948. ACM Press, New York, 1995.

[110] Henderson-Sellers, B., Object-oriented metrics: measures of complexity, Prentice-Hall, pp.142-147, 1996.

[111] Briand, L. C., Daly, J. W., and Wüst, J., "A Unified Framework for Cohesion Measurement in Object-Oriented Systems", Empirical Software Engineering, vol. 3, no. 1, 1998, pp. 65-117.