
NSIT DATA CENTER &MIDDLEWARE TASK FORCE

Report of the Task ForceFeb. 28, 2007

THE UNIVERSITY OF CHICAGONSIT Networking Services & Information Technologies

Contents

Membership & Participants
Scope & Purpose
Process
Understanding Data Center Services
    Staff
    Resources
    Services Provided
Campus Application Issues
    Administrative Systems
    Facilities Services
    Financial & Human Resource Applications
    Library
    Academic Technologies
    NSIT Senior Directors
Middleware Issues
    Middleware Perspectives
    Different Needs, Different Skills
Learning from Others
    Interview Questionnaire
    Harvard University
    Purdue University
    University of Illinois
    University of Pennsylvania
Recommendations
    Vision
    Summary Recommendations
    Additional Comments
Appendix: Application Subcommittee Interview Questions
Glossary
References


Membership & Participants

Sponsor
Gregory A. Jackson, VP and Chief Information Officer

Task Force
Rich Breen, Sr. Director, NSIT Data Center Services
Mike Fary, Assoc. Director, NSIT Renewal Projects & Architecture
Ed Jakubas, Director, NSIT Open Systems Engineering & Architecture
Chad J. Kainz, Sr. Director, NSIT Academic Technologies (Task Force Co-Chair)
Conor McGrath, Manager, NSIT Network Security
Frances McNamara, Director, Integrated Library Systems and Administrative & Desktop Systems, Library
Sandy Senti, Asst. Dean and Exec. Director, Biological Sciences Information Services (Task Force Co-Chair)
Rob Speer, Assoc. Director, NSIT Network Information Systems & Facilities IT
Mike Willey, Director, NSIT Technology Assessment & Resource Planning

Participants
Mary Anton, Cornelia Bailey, Bill Barkley, Bob Bartlett, Tom Barton, Charles Blair, Juris Bocek, Blair Christensen, Chris Comerford, Mike Dublak, Chris Flesher, Shelly Fried, Ernie Froemel, Dan Fuhrman, Kaylea Hascall, Alex Henson, Jim Hobein, Ken Hopper, Gregory A. Jackson, Bob Johnson, Brian Knapp, Ron King, Arin Komins, Jim Lichtenstein, Natalie Linden, Rich Marshall, John Mohr, Mari Nakatsuka, Cung Nguyen, Bob Reagan, Helen Seren, Alan Takaoka, Ron Thielen, David Thomas, David Trevvett, Therese Allen-Vassar, Bob Vonderohe, Matt Werner, Bruce Williams, Scott Wilson, Elena Yatzeck.

Vendors
Oracle, SAP, SunGard SCT, and IBM.

Institutions
Harvard University: Joe Bruno, Director, Network Server Systems, and Erica Cahill, Manager, Program Management Office; Purdue University: Brett Coryell, Interim AVP, IT Infrastructure; University of Illinois Urbana-Champaign: Mike Lyon, Interim Assistant Vice President, Computer Operations, and Jason Heimbaugh, Interim Director, Computer Operations Applications and Infrastructure; University of Pennsylvania: Ray Davis, Executive Director, Systems Engineering & Operations, and Donna Manley, Director, Data Center Operations.


Scope & Purpose

The Data Center & Middleware Task Force was charged with providing guidance and specific recommendations to NSIT and the VP/CIO toward an appropriate long-term strategy for the NSIT Data Center and related middleware activities and services.

The Task Force examined the existing Data Center Services operation, established and emerging middleware activities across NSIT, and the state of current and anticipated centralized data center and middleware needs for campus. In addition, the Task Force actively explored how other IT organizations, both inside and outside of higher education, addressed related data center and middleware issues; this investigation included organizational structures, scopes of services, support and management issues, resource allocation and renewal, staffing and budgetary models, etc.

Deliverables
The report of the Task Force details the nature and scope of the next evolution of the NSIT Data Center, its staffing, and services (hereafter referred to as DCnext), and includes:

1. An overview of the current state of Data Center Services that includes a summary of resources, technologies, services, and staff;

2. A summary of applications hosted and systems supported by Data Center Services including ones currently in production, and those planned for replacement and/or new deployment through 2010;

3. A high-level assessment of institutional middleware needs that identifies a subset that is most appropriate for inclusion within DCnext;

4. A proposal for a suite of DCnext services that best matches available resources and planned growth, addresses anticipated application and system evolution, and is most appropriate for meeting the identified subset of current and emerging middleware needs;

5. Recommendations for establishing, maintaining, and evolving technology and data standards, common data interfaces and bindings, and the overall architecture of DCnext;

6. Recommendations for the management and governance of DCnext, including formal participation of stakeholders;

7. Models for maintaining and renewing the services and resources of DCnext including both immediate needs and long-term sustainability;

8. A position definition that outlines the job requirements for the role of Senior Director of DCnext;

9. A mission and vision statement.


Process

A data center is often thought of as a stable, evolutionary unit within an IT organization. The challenge for the Data Center & Middleware Task Force was to look forward while remaining cognizant of what needs to be maintained for business and process continuity. Balancing these perspectives and tackling the charge in one swath is a daunting task; therefore, the process was divided into four manageable parts, three of which come together to inform the final piece.

The Task Force must have a reasonable understanding of the current state of services and future needs. Therefore, the group conducted a review of Data Center Services, application areas, and middleware to gather information on what the NSIT Data Center is and whom it serves. The group divided into subcommittees, each mapping to specific areas of inquiry. Reviewers were encouraged to add outside experts to the subcommittees who could provide in-depth detail and valuable input as appropriate. In addition, the subcommittees conducted interviews of clients, members of other units, and staff as needed in order to develop a fuller understanding of the topic.

Based on materials gathered through the review process, the Task Force identified specific areas to explore that provided the framework for the five categories of recommendations. Using the information gathered in the identification step and comparing it to the materials from the review, the Task Force assessed the current data center, staffing, and services to see whether current and anticipated needs can be met.

Recommendations are based on the information gathered in the previous steps. The Task Force believes its recommendations are both forward-looking and appropriate for maintaining process continuity, and that they are neither predicated on the status quo nor unreasonable or unrealistic.

All DCMTF material was hosted as an organization on http://chalk.uchicago.edu/.


Understanding Data Center Services

To better understand the scope of Data Center Services (DCS), an open discussion was held that covered four areas: people, organization, resources, and services. The focus was not on numbers and statistics, but rather on the functions and capabilities of DCS. The subcommittee included Chad Kainz, Mike Willey, Rob Speer, and Rich Breen, with additional input from Jim Lichtenstein, Helen Seren, Jim Hobein, Rich Marshall, Ron Thielen, and Chris Comerford.

Staff
Data Center Services is organized into four major groups: Computer Operations, Production Services, Mainframe Support, and Open Systems Engineering and Architecture. These groups work together to provide a secure raised-floor environment supporting services such as production and test system processing, production automation, secure data transfer, simple data transformation, operating system installation and maintenance, ISV (Independent Software Vendor) software installation and maintenance, production and environmental monitoring, and enterprise storage services (online and offline storage, backups, etc.).

Mainframe Computer Operations
Operators provide 24x7 monitoring of mainframe production processes. They initiate production processing according to documented schedules, respond to production problems using documented escalation procedures, and monitor the physical environment, responding to environmental alarms.

Production Services
Production Services personnel devise and schedule production processes to match business needs using a variety of automation products and techniques. They also support most standard encrypted file transfer methods (including PGP, SSH, and SSL) and simple data transformation services to adapt data to different operating environments.
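As an illustration of the "simple data transformation" mentioned above, a common case is converting records between the mainframe's EBCDIC encoding and the ASCII/UTF-8 used on open systems. A minimal sketch in Python (the sample record is hypothetical; cp037 is one common US EBCDIC code page, not necessarily the one DCS uses):

```python
# Convert a mainframe-style EBCDIC record to text for open systems, and back.
# cp037 is a US EBCDIC code page supported by Python's standard codecs.

def ebcdic_to_text(record: bytes) -> str:
    """Decode an EBCDIC (cp037) record for use on open systems."""
    return record.decode("cp037")

def text_to_ebcdic(text: str) -> bytes:
    """Encode text back to EBCDIC for transfer to the mainframe."""
    return text.encode("cp037")

if __name__ == "__main__":
    original = "PAYROLL 2007-02-28"          # hypothetical record content
    ebcdic = text_to_ebcdic(original)
    assert ebcdic != original.encode("ascii")  # byte representations differ
    assert ebcdic_to_text(ebcdic) == original  # round trip is lossless
```

Real production transforms also deal with packed-decimal fields and fixed-width record layouts, which a plain codec conversion like this does not cover.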

Mainframe Group
The mainframe group ensures that the mainframe hardware and all its peripherals are operational. They also install and maintain patched, current versions of the z/OS operating system and a large variety of ISV software from several vendors.

Open Systems Engineering and Architecture
Open Systems Engineering and Architecture handles the installation, configuration, and patching of non-mainframe operating systems: Windows, AIX, Solaris, Linux, etc. They are also involved in the installation of service software such as SSH daemons, and collaborate in the installation of application software such as database servers. They also administer the SAN (Storage Area Network) and the application software on utility systems such as the Virtual Tape Appliance and the Tivoli Storage Management system.

Resources
The Data Center is responsible for a number of resources, ranging from the physical environment of the raised floor to the machines housed there and the operating system and other software that support the infrastructure for administrative computing. Some of these resources are listed here.

Mainframe
An IBM z800 mainframe computer running z/OS is installed. The mainframe provides 24x7 service.


Open Systems
The Data Center houses over 300 non-mainframe systems, maintaining over 200 of these as well as a variety of controllers, appliances, and a mid-range storage facility (see below). An increasingly large share of the open systems deliver services that must operate 24x7.

CICS Transaction Service
This service provides online, real-time query and execute services for mainframe-based applications.

Storage
The Data Center provides a SAN (~70 terabytes), a tape robot (~150 terabytes), and a cartridge tape library (~18,000 3480 tapes).

Printing
The Data Center operates two mid-range enterprise-class high-volume Xerox laser printers. These printers are used to print reports, checks, timecards, ledgers, statements, etc. for the institution at a rate of approximately 700,000 pages per month.

Services Provided
The Data Center provides traditional data center services to a wide range of clients that include (but are not limited to) Payroll, Financial Accounting, Accounts Payable, Registrar, Library, Development & Alumni Relations, Facilities Services, University Research Administration, and other units within NSIT including Academic Technologies, Administrative Systems, General Services, and Data Networking. The following is a list of some of the services provided by the Data Center:

• Production automation.

• Secure data transport (using standard encryption technologies).

• Simple data transformation.

• Production monitoring.

• Environmental monitoring.

• Backup services for object restore (dataset or database).

• Offsite backups for disaster recovery.

• Server hosting and management.

• A secure, raised-floor, climate-controlled computing environment.

• Hardware and software contracts and maintenance.

• Expertise in support of hardware and software systems procurement.

There is no single funding model that applies to all of the services and resources of DCS. Funding comes from the core university budget and allocations from capital projects, but also takes the form of staffing arrangements, limited recharge, technology end-of-life refresh charges, and other custom and tailored methods. Many of the services are delivered through individual person-to-person or group-to-group arrangements that typically do not take the form of service level agreements.


Campus Application Issues

The subcommittee on Hosted Applications & Supported Systems, which included Sandy Senti, Ed Jakubas, and Frances McNamara, conducted a number of interviews with a set of clients who depend on DCS for a range of services and resources. The results of these interviews cluster together as sets of campus application issues that are summarized below.

Administrative Systems
The replacement and renewal of administrative systems is guided by the Administrative Systems Council (ASC), which has as one of its major tasks the responsibility for defining and periodically updating the timeline for new systems. The current timeline sets 2015 as the year by which all systems will have migrated off the mainframe and onto different technologies. Within this 10-year window, administrative systems renewal will both provide an opportunity to selectively introduce new technologies and require a number of new approaches to meet service needs. In the interim, legacy systems will need to be supported on a reasonable range of client platforms with minimal service interruption, thus creating service maintenance, migration, and delivery challenges until well after the overall transition is complete.

Looking forward, the range of technologies and services related to administrative systems that are beyond specific applications slated for renewal includes:

• Robust terminal server capabilities to help address service delivery to MacOS workstations, but perhaps also to address security concerns for particular applications and types of data;

• IT Governance systems, processes and procedures that include project and application portfolio management, change and configuration management, and other areas such as trouble-ticketing, performance and applications monitoring, etc.;

• More comprehensive, cross-platform configuration management tools and processes, hopefully encompassing not just software and related documentation, but also hardware;

• Service Oriented Architecture (SOA) and all of the infrastructure alphabet soup that SOA relies on – UDDI, SOAP, XML, etc. Since new core financial systems are not scheduled to be up and running until around 2014, it may make sense to expose some of the services in today’s FAS and APS through SOA to facilitate more dynamic interaction with new and existing applications, as many application vendors are moving to this architecture. This could facilitate the interaction of a new HR/Payroll system with the current FAS system, the APS system with an ASP Expense Management System, and/or the Time and Attendance System;

• Expanded set of business intelligence tools and infrastructure such as dashboards, schedulers, a comprehensive metadata repository that serves multiple tools and systems, more robust Extract Transform and Load (ETL) tools, etc.;

• University-wide portal, in particular one that can address both student and administrative staff needs, along with an appropriate portal strategy;

• Rich collaboration tools and environments to support project work and cross-group support teams;

• Broadly defined content management infrastructure and processes that could collectively address document management needs for Development and Alumni Relations, Bursar, College Admissions, Payroll, UHRM, and institutional review boards, as well as service and process content such as system documentation and training materials;

• Electronic report distribution;

• Service management tools that check whether a particular service is actually available to end users and responding, and that flag problems and send notifications;

• Testing tools that could include batch load testing, online performance testing, running multiple scripts to verify correct execution, etc.;


• Vastly improved trouble ticketing systems that provide tickets and alerts across groups (e.g., so that when a financial system problem is reported to the NSIT help desk and routed to the Comptroller's Office for resolution, the Comptroller's Office can directly record that it has been dealt with);

• Centralized authority management infrastructure that links with next generation HR applications.
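As a concrete illustration of the SOA item above, the sketch below builds and parses a minimal SOAP 1.1 envelope in Python. The getBalance operation and its account field are hypothetical names invented for illustration, not an actual FAS or APS interface; only the envelope structure is standard SOAP:

```python
import xml.etree.ElementTree as ET

# Standard SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(account: str) -> str:
    """Wrap a hypothetical getBalance call in a SOAP 1.1 envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "getBalance")          # hypothetical operation
    ET.SubElement(call, "account").text = account      # hypothetical field
    return ET.tostring(env, encoding="unicode")

def parse_request(xml_text: str) -> str:
    """Pull the account identifier back out of the envelope."""
    env = ET.fromstring(xml_text)
    return env.find(f"{{{SOAP_NS}}}Body/getBalance/account").text

if __name__ == "__main__":
    msg = build_request("FAS-12345")   # hypothetical account number
    assert parse_request(msg) == "FAS-12345"
```

A production SOA deployment would layer WSDL descriptions, transport security, and service registration (UDDI) on top of envelopes like this one.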

Facilities Services
Facilities Services depends on a team of NSIT-managed applications support staff who are on site at the Young Building, as well as server management that is provided by DCS. Elements of Facilities Services' applications are unique to the unit (e.g., energy management and CAD systems), whereas others are similar but have unique challenges (large-scale document sharing and collaboration on multi-year building projects that encompass commercial contractors and non-educational users, as well as trouble ticketing, work order processing, digital media, etc.). These systems and processes extend over and above the application set that is common to most administrative areas (desktop applications, financial systems, etc.). Challenges facing Facilities Services include:

• Larger and more complicated CAD and graphic files and databases that must be stored, shared, archived, and retrieved. Complicating matters, graphics and CAD files store more than pictorial information – they include extensive data and metadata and often integrate databases and other extended file and data types;

• Upgrade FAMIS space management system from client/server to web-based application (proprietary software in Oracle environment);

• Upgrade AutoCAD software and potentially add 3D modeling and CIS capacity;

• Expanded use of digital media software such as Adobe Creative Suite and web tools;

• Document management;

• Project management and collaboration, either as an in-house service or as an ASP/hosted service;

• Wider access to Facilities Services' data by the University community.

Looking forward, Facilities Services is unclear about what is possible and how to proceed regarding a number of the above challenges, especially within the context of Data Center Services as DCS appears to lack:

1. definition as to its scope of service,
2. a catalog of services,
3. SLAs and, in particular, service and support commitments,
4. storage, backup, and archiving strategies that make sense to clients, and
5. coherent, client-focused consulting services.

Financial & Human Resource Applications
Within NSIT Administrative Systems are the Financial & Human Resource Applications (FHRA) groups, which depend on DCS for operating system support, file transfers, printing from the mainframe, central file server support, and other related IT-mediated data management processes. They use the mainframe for applications but also use other servers for various related systems.

A number of themes emerged out of the discussions with the FHRA groups. These include:

• Need for change management systems;

• Increased storage requirements both on the mainframe and SAN;

• Migration toward servers supported by DCS or ASP vendors and away from locally-supported systems;

• Desire for greater access to shared database services;

• Exploration of n-tier systems that abstract the data, business logic, and presentation/client layers;

• Support for offline storage and file transfer;

• Disaster recovery and business continuity planning;

• Tools for business intelligence including metrics and analysis.
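The n-tier item above can be made concrete with a short sketch that keeps the three layers apart. All class, field, and record names here are hypothetical, chosen only to illustrate the separation of data, business logic, and presentation:

```python
# Data tier: knows only how to fetch and store records.
class EmployeeStore:
    def __init__(self):
        # Stand-in for a database; the record is hypothetical.
        self._rows = {"U123": {"name": "A. Staff", "salary_cents": 500000}}

    def get(self, emp_id: str) -> dict:
        return self._rows[emp_id]

# Business tier: applies rules; never touches storage details or formatting.
class PayrollService:
    def __init__(self, store: EmployeeStore):
        self._store = store

    def monthly_pay_cents(self, emp_id: str) -> int:
        return self._store.get(emp_id)["salary_cents"] // 12

# Presentation tier: formats results for a client.
def render_pay(service: PayrollService, emp_id: str) -> str:
    return f"Monthly pay: ${service.monthly_pay_cents(emp_id) / 100:.2f}"

if __name__ == "__main__":
    svc = PayrollService(EmployeeStore())
    print(render_pay(svc, "U123"))  # → Monthly pay: $416.66
```

Because each tier talks only to the one below it, the data tier could later be swapped for a DCS-hosted database without changing the business or presentation code.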

On the horizon, the FHRA groups are looking to install, update, or improve the following:

• Tuition Assistance Database that will include Active Directory authentication, automatic data feeds to and from the Accounts Payable module on the mainframe, and Web-based tuition application forms;


• Custom UHRM databases, by migrating them from FileMaker Pro to centrally maintained servers and databases;

• HRMS (Integral Personnel & Payroll system), which includes a number of updates and changes to enhance capability and reflect current business practice;

• Pension Systems in FE58;

• FOCUS by Information Builders, to the Web-based iFOCUS;

• Benefits Management Systems (BMS);

• ATS system that takes into account the new qualified transportation process;

• Time and attendance system;

• TRACS and IRBWise, by moving them onto new systems and enhancing file storage capabilities;

• Imaging System, by exploring a future release that incorporates .NET;

• Form printing through HFDL;

• Expense management ASP.

Library
Housed in the Regenstein Library, the Library's IT groups consist of programming and development teams and a significant desktop and public computing operation that supports all of the campus libraries and library staff. Aside from desktop support, Library IT creates, manages, maintains, and enables access to the library catalog, licensed digital resources, and other academic content. In addition, it collaborates with faculty, researchers, and libraries locally, regionally, nationally, and globally on developing and hosting digital collections and similar content-related scholarly projects.

Integrated Library Systems (ILS) manages and maintains the main library system, Dynix1. In this capacity, ILS is a client of DCS as it depends on the Data Center for supporting the hardware, operating systems, backup and restoration (via TSM), and storage (SAN) of the library system and catalog data; the library system itself is administered and supported by ILS.

The Digital Library Development Center (DLDC) creates and supports digital collections. The DLDC manages, supports, and maintains its own systems and applications. It uses the Data Center's TSM backup and archiving facility. It is a participant in the joint Library/NSIT pilot project to provide limited archiving capabilities for a narrowly-defined subset of data. This particular project is an outgrowth of the Digital Archiving Task Force and represents a combined effort to better understand the issues and needs surrounding institutional archiving — an effort that crosses both NSIT and Library domains. It is expected that this activity will make use of the Data Center's TSM archiving facility and the SAN.

ILS turns to DCS for central services including authentication, SAN storage, Oracle database management, and disaster recovery. Looking forward, ILS and DLDC would like to see a clearly defined backup service bound by an SLA and a clearly defined disaster recovery plan. ILS would like to see Oracle database services that include a reasonable and attractive pricing structure. In addition, ILS is moving the library system to an n-tier application using various elements of middleware. The scope and range of middleware options that could potentially be provided by NSIT is of both interest and concern.

Academic Technologies
Nearly four years ago, Academic Technologies was asked by the Board of Computing Activities & Services to explore ways to enhance IT support for research. At approximately the same time, the incoming proposals to the Provost’s Program for Academic Technology Innovation2 indicated a shift in IT funding requirements as more proposals requested funds for software development, off-the-shelf software, and servers to host academic applications. Finally, faculty meetings3 during the 2004/05 academic year revealed a growing desire to create reusable and sharable learning objects, both as individual content elements that would be included within the learning management system4 and as self-contained scholarly websites of shared content. These three trends, coupled with the rapid expansion of media classrooms5 and the forthcoming addition of USITE 61 within the new south campus residence hall in 2008, are leading to a major shift in academic IT systems and infrastructure needs.

1 http://www.sirsidynix.com/

2 Commonly known as ATI, the program was started in the mid-1990s to provide technology seed funding to University of Chicago senior lecturers, assistant and associate professors, and professors. Award recommendations are made by the Board of Computing Activities & Services to the Provost. The program is administered by NSIT through Academic Technologies. http://ati.uchicago.edu

3 Meetings included the Council on Teaching, College Council, Center for the Study of Languages Advisory Committee, and the Library Board, as well as numerous individual discussions with faculty and lecturers.

Academic Technologies sees itself as an aggregator of NSIT services. For example, all public-facing services such as USITE printing services and the WebStations, along with the bulk of internal IT systems, depend on centralized identification and authorization services and, in the case of USITE printing and the WebStations, depend on DCS for systems administration. Chalk, the campus learning management system administered by DCS but managed by Academic Technologies, adds an additional layer of dependency by obtaining all of its course-related data from the student system. Non-course sites such as departmental organizations and committees are currently managed through a manual process, but the hope is to use Grouper as the “Registrar.” The intent, aside from efficiency, is to enhance the value of IT services by expanding upon existing centralized infrastructure when possible.

For the last five years, Academic Technologies has been stabilizing its infrastructure and services, and has positioned itself for a period of rapid innovation that will continue well through the 2008/09 academic year. Areas of exploration include:

• Application virtualization of USITE software by 2007 using technologies such as Citrix or Tarantella piloted on the grant-funded DCS-managed IBM BladeCenter backed by the DCS SAN;

• Development of WebStation 2.0, a next-generation thin client solution for WebStation clusters and media classrooms;

• Coordinated upgrade of the Blackboard Learning System in the summer of 2008, possibly shifting from Solaris to Linux as the underlying hardware will be due for replacement;

• Exploration of Sakai6 as a potential research or organizational collaboration environment as well as an extended learning management environment beyond Blackboard;

• Participating in the design, development, and deployment of the campus uPortal infrastructure and actively participating in the uPortal open source project;

• Creating internal services that depend on uPortal as the delivery platform and user interface, and are based on SOA and “Web 2.0” for aggregation and integration;

• Launching the Scholarly Projects Infrastructure Initiative, a technology-oriented service that includes:

• Distributed computing for research and teaching that:

1. uses Condor on x86-based USITE machines in a cycle-harvesting model;

2. contains dedicated x86 and SPARC clusters that utilize end-of-life production servers in the four-to-five-year extended hardware life span period and assume a no-cost “throw away” model regarding hardware maintenance;

3. is harmonized with Grid-based infrastructure and services with Argonne National Laboratory and is developed in collaboration with Argonne and the Computation Institute.

• Server consolidation on Opteron-based hardware through virtualization;

• Infrastructure distribution throughout the campus network to minimize traffic and to explore distributed infrastructure management issues;

• Scholarly Web-based hosting services that are research, academic, or course-specific and do not fall within the domain of administrative, departmental, or enterprise services and systems;

• Programming and application development support for scholarly projects.
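The Condor cycle-harvesting model described above is typically driven per job by a submit description file. The fragment below is a minimal illustrative sketch, not a configuration from this deployment; the executable and file names are hypothetical:

```
# Run one job in Condor's "vanilla" universe on a harvested USITE cycle.
universe    = vanilla
executable  = analyze.sh
arguments   = dataset.dat
output      = analyze.out
error       = analyze.err
log         = analyze.log

# Ship the input along and bring results back when the job exits.
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = dataset.dat

queue
```

In a cycle-harvesting pool, Condor starts such jobs only when a USITE workstation is idle and evicts them when an interactive user returns.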

Specific to the Data Center, Academic Technologies receives very good support from individuals in DCS, but would like to see this more systematized so that quality support is consistently available across DCS. From a technology perspective, Academic Technologies has purchased SAN space but has not yet taken advantage of it. They would like to tie into the DCS SAN for a larger number of their production and pseudo-production services, and anticipate doing so in the coming months. Chalk is currently not on the DCS SAN and depends on a recently purchased Sun SAN as an interim solution. This solution was recommended as an interim option that may extend only through the planned major upgrade of Chalk in 2008. TSM is not used today because too much data needs to be backed up (although some archived data does get sent to TSM). As an alternative, Academic Technologies relies on mirrored RAID arrays for redundancy.

4 Chalk, the campus learning management system as of 2006, is based on the Blackboard Learning Suite v6. http://www.blackboard.com/

5 During the first five years of media classrooms (1996-2001), the pool of classrooms grew from 19 to 24. From 2002-2006, the number jumped to approximately 60.

6 http://www.sakaiproject.org

On the horizon, Academic Technologies is concerned about academic needs centered on:

• the explosion of digital media storage and bandwidth needs, culminating with the proposed Center for Creative Arts but highlighted with the opening of USITE 61;

• growth of data sets, and in particular BSD cross-correlating data sets;

• shared data services across disciplines and academic domains;

• scholarly data backup and archival needs for teaching, learning, and research;

• inter-institutional collaboration;

• developing initiatives related to cyberinfrastructure.

NSIT Senior Directors
The senior directors voiced a need to present a better face to the clients as well as to formalize processes within the Data Center. There is a need to provide services with capability, predictability, stability, and affordability. Processes necessary to do this include change management, monitoring, notification, configuration management, and the definition of a long-term architecture. On the client side, the directors need to develop a more collaborative approach and structure services so that they are easier to use and understand. Hard and fast rules are not the way to go; rules need to benefit the University of Chicago, not NSIT. One approach would be to create an advisory board for machine rooms at the university to help provide collective management of data centers.


Middleware Issues

"e subcommittee on Middleware Needs consisting of Rich Breen, Conor McGrath and Mike Fary was charged with looking at how middleware is currently used at the University, how it could potentially be used in the future, what func-tions it supports and should support, and how the organization should be structured to administer the technologies.

"e subcommittee met with a cross section of NSIT, software providers, and selected peer institutions as well as explored industry research to obtain insight and recommendations. Following this fact finding effort, an intense analysis was car-ried out to outline what the current and future state of middleware would be for the University, and how NSIT should be organized, staffed, and educated to most effectively put in place and manage those technologies and processes. "e result of that analysis, along with recommendations, are provided in this report.

Middleware Perspectives

Aside from conversations and fact-finding meetings, additional research was conducted with industry analysts familiar with these types of technologies. The groups represented in the conversations were from the following areas:

• NSIT: Application Management, Office of the CIO, Infrastructure & Integration, Security, Web Services, Senior Directorate, Windows System Administration, Unix System Administration, Mainframe Administration, and Network Administration

• Vendors: Oracle, SunGard SCT, IBM, and SAP
• Peer Institutions: University of Illinois, Purdue University, University of Pennsylvania, and Harvard University

In addition, the reporting services of the Gartner Group were used by the middleware subcommittee to better understand broader industry trends in middleware. Based on those discussions and outside research, the Task Force defines middleware as:

Middleware is a set of service components which are usable by many applications or other services. These reusable component services have uses beyond a single application.

How it breaks down

The distillation of the conversations and analysis resulted in what we believe to be a snapshot of the categories of middleware that can reasonably be expected to be needed, taking both a short-term and a longer-term view. The categories that were identified are as follows:

SHORT TERM

1. Data transport mechanisms, in the form of file transfers
2. Authentication (in the form of LDAP and Active Directory)
3. E-commerce, including EDI and electronic funds transfer
4. BizTalk
5. Application server management for technologies including WebLogic, jBoss, and Apache Tomcat
6. Common understanding of data context between applications
7. Web server management for technologies including Microsoft IIS and Apache HTTP Server
8. OpenLink (or other database connection technologies such as Sybase EnterpriseConnect Data Access)
9. Database environment management

LONGER TERM


1. Data transport mechanisms (this requirement expands to include more real-time data movement techniques). These include:
• file transfers
• messaging (message-oriented middleware and/or asynchronous messaging)
• publish/subscribe techniques
• integration brokers

2. Data administration/data management
• data definition and standards
• common understanding of data context between applications

3. Authentication in the form of LDAP and Active Directory, and possibly others such as OID and Shibboleth
4. Authorization, in particular enterprise class-based authorization as opposed to the individual approach used today
5. E-commerce, including EDI and electronic funds transfer
6. BizTalk
7. Application server management, including WebLogic, WebSphere Application Server, jBoss, Apache Tomcat, and future application server-like technologies
8. Web server management, including Microsoft IIS and Apache HTTP Server
9. Identity management, including OneDirectory and MCDB
10. OpenLink (or its potential replacement), including direct database connection technologies such as Sybase EnterpriseConnect Data Access and IBI's iWay
11. Configuration management
12. Database environment management
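The shift from batch file transfers toward the message-oriented techniques listed above can be illustrated with a minimal publish/subscribe sketch. The topic name and payload below are hypothetical; production integration brokers add routing, persistence, and delivery guarantees that this toy in-process broker omits:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal in-process publish/subscribe broker (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        # Deliver the message to every subscriber of the topic;
        # the publisher does not know or care who consumes it.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
# Two independent consumers subscribe to the same (hypothetical) topic.
broker.subscribe("accounts.updated", received.append)
broker.subscribe("accounts.updated", lambda msg: received.append({"audit": msg["account"]}))
broker.publish("accounts.updated", {"account": "12345", "balance": 100.0})
```

The point of the pattern is decoupling: unlike a point-to-point file transfer, new consumers can be added without changing the producing system.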

Different Needs, Different Skills

After interviewing a number of individuals and groups across the IT landscape of campus, the subcommittee applied a model of functional decomposition to several of the large categories of middleware that were identified. The ultimate goal of the exercise was to better understand how the tasks in each of the areas should be grouped, by reducing them to their fundamental functions and then placing those tasks and responsibilities where the maximum efficiency could be attained. This process should help to identify the organizational model that would best support the technologies, and also identify staffing and skill gaps that need to be addressed.

Functional Decomposition of Activities

Functional decomposition is the technique of breaking down a task into a number of smaller and simpler sub-tasks. It has been used for years in the analysis and design of large computer applications. The goal of functional decomposition is to reduce a system step by step to a collection of irreducible fundamental functions, starting at the topmost level of functionality.

Applying a Yourdon technique, the process occurs in four stages: current physical, current logical, new logical, and finally new physical. The subcommittee attempted to map out the current physical implementation, remove the physical boundaries to arrive at the current functional (logical) view of the activities, move to the preferred functional scheme, and finally impose the physical constraints on the logical view to fit into the planned environment.

MAXIMO EXAMPLE"e Data Transport example can be described as follows, using a scenario of moving a file of account data from the FAS system to the Maximo facilities management system. "is particular process involves a number of groups including NSIT Data Center Services, Facilities Services, Facilities IT, Comptroller’s Office, and Financial Systems.


Current Physical"e high level activities are depicted along the Y axis. "e high level areas responsible for those activities are shown along the X axis. An X in a column indicates the intersection of the activities with the responsible area.

Activities                              | Data Center Services | Comptroller, Financial Systems | Facilities Services, Facilities IT
Job scheduling and error notification   | X                    |                                |
Installation of transport software      | X                    |                                |
Source data definition                  |                      | X                              |
Source data extraction                  |                      | X                              |
Data transformation and loading         |                      |                                | X
Authentication and authorization        |                      |                                | X

Current Logical"e activities from the physical model are now broken down into lower level activities, with a higher degree of granular-ity. "ey are grouped into rows, corresponding to the high level activities on the current physical model. Across the top, functional work areas have been defined. Each of the activities is then assigned an Owner and Implementors. Each activ-ity can have only one Owner, but can have one or multiple Implementors. "e rows are arranged in horizontal bands to attempt to group the Ownership together.

Functional work areas (columns): Hardware mgmt | OS mgmt | Database software mgmt | Database schema mgmt | Data interface definition | API definition | Process owner | Process implementor | Data owner | Data user

Data Center Services
    Timing definition: O, I
    Error notification for data movement: O, I
    Error correction for data movement: O, I

Facilities Services, Facilities IT
    Definition of data targets: I, I, O
    Definition of data transformation: I, I, O
    Implementation of data transformation: O, I, I
    Implementation of data targets (schema level): O, I, I, I

Comptroller, Financial Systems
    Definition of data source(s): O, I
    Data policy definition: O, I
    Data policy enforcement: I, O
    Implementation of source data extraction: I, I, O

Facilities Services, Facilities IT
    Authentication: O, I
    Authorization: O, I

Data Center Services
    Installation of client-side transport software: O, I
    Installation of server-side transport software: O, I

Process: moving account data from Maximo to FAS. Process owner: NSIT DCS Prod Shop. Process implementor: Facilities IT. Data: FAS account data. Data owner: Comptroller.


New Logical

By analyzing the ownership groupings, functional work areas begin to emerge. These areas are dominated by the ownership of the tasks. Selecting these groupings highlights tasks which could be brought together into a functional workgroup. The original logical banding remains for clarity.

Functional work areas (columns): Hardware mgmt | OS mgmt | Database software mgmt | Database schema mgmt | Data interface definition | API definition | Process owner | Process implementor | Data owner | Data user

Timing definition: O, I
Error notification for data movement: O, I
Error correction for data movement: O, I
Definition of data targets (see [A] below): I, I, O
Definition of data transformation: I, I, O
Implementation of data transformation: O, I, I
Implementation of data targets (schema level): O, I, I, I
Definition of data source(s): I, I, O
Data policy definition: O
Data policy enforcement: I, O
Implementation of source data extraction: O, I
Authentication: O, I
Authorization: O, I
Installation of client-side transport software: O, I
Installation of server-side transport software: O, I

Process: moving account data from Maximo to FAS. Process owner: NSIT DCS Prod Shop. Process implementor: Facilities IT. Data: FAS account data. Data owner: Comptroller.

[A]: This grouping includes the users of the newly transported data, those who will be implementing the final destination of data, as well as those who will be implementing the transformation process. This may include the project team or a group who is dedicated to data movement and the nuances of that technology.


[Diagram of the functional workgroups emerging from the analysis: System Management, Database Administration, Middleware Management, Identity Management, Process Execution, Data Management]

New Physical"e tasks which were grouped together into workgroups can be located into existing or newly created organizational structures. "e shaded areas within the table correspond to those identified in the New Logical step. For visual consis-tency, the ordering of activities is the same as the previous two steps.

Organizational units (columns): Systems Management | Database Group | Data Transport & Integration | Identity Management | ProdShop | Various Other IT Groups | Data Management

Timing definition: O, I
Error notification for data movement: O, I
Error correction for data movement: O, I
Definition of data targets: I, I, O
Definition of data transformation: I, I, O
Implementation of data transformation: O, I, I
Implementation of data targets (schema level): O, I, I, I
Definition of data source(s): I, I, O
Data policy definition: O
Data policy enforcement: I, O
Implementation of source data extraction: O, I
Authentication: O, I
Authorization: O, I
Installation of client-side transport software: O, I
Installation of server-side transport software: O, I

Skills & Competencies for Middleware Management

As reflected in the functional decomposition exercise, effective middleware management requires a breadth of skills that extends beyond the traditional realm of data management. The Task Force identified ten (10) key skill and competency areas that should be considered when developing a middleware strategy:

• Technology-Specific Skills: Skills and competencies that a specific technology or tool set may require. For example, management of WebSphere will require training in the specific language, environment, and tools of the product. This would be acquired by attending training classes offered by IBM or suitably certified vendors.

• Secure Transaction Principles: Skills related to understanding, developing, and maintaining transactions which flow through an SOA or Web Services architecture. These include not only technologies like SSL and encryption, but also how to package transactions, ensure that the proper state is maintained, and allow for recovery in the case of lost transactions.

• Quality Assurance/Testing: Skills related to the development and execution of test plans, test scripts, regression testing, and system testing.

• Business Modeling: Skills related to the understanding and development of process models, human interface models, and business flow.

• Data Modeling: Skills related to the understanding and development of conceptual and physical data models, class diagrams, and data movement models.

• System Management: Skills related to managing servers, environments, and similar technologies.
• Metadata Management: Skills related to the understanding, development, and use of metadata. Since metadata can come in many styles and flavors, different technologies, tools, and disciplines will require many different slants on metadata.


• Work Breakdown: Skills related to the understanding and implementation of work breakdown structures, task decomposition, and scope management.

• Scripting: Skills related to the development of scripting or command languages, or techniques which control the execution or flow of managed groups of work.

• Project Management: Skills related to the management of projects or tasks.

At present, the Data Center needs to develop and maintain these skills and competencies in order to support the processes and services it is responsible for, and the near-term slate of new administrative systems. The table below illustrates what the Task Force believes to be the current skill and competency requirements within the Data Center:

CURRENT SKILLS AND COMPETENCIES (2006)
Skill areas (columns): Technology-specific skills | Secure transaction principles | Quality assurance/testing | Business modeling | Data modeling | System mgmt. | Metadata mgmt. | Work breakdown | Scripting | Project mgmt.

Data transport mechanisms (feeds, flows, simple file transfers): X X X X X X X
Authentication (LDAP, Active Directory, OID, Shibboleth): X X
E-commerce, EDI, electronic funds transfer: X X X X X X X X
BizTalk: X X X X X X X X X X
Application server management (WebLogic, WebSphere Application Server, jBoss, Tomcat): X X X X X X X X X X
Web server management (IIS, Apache): X X X X X X
OpenLink: X X X
Data context understanding: X X X
Database administration: X X X X

Looking ahead, the Task Force believes that the range of skills needed to support the University's enterprise middleware infrastructure and related service requirements will grow dramatically, especially in the area of data transport mechanisms. Today, data transport is perceived as merely feeds and flows. However, we believe that a more granular view of data transport is required, encompassing more than simple file transfers. In particular, we feel that messaging, publish/subscribe models, and integration brokers will become more common as enterprise systems are installed, refreshed, and renewed. In addition to deeper skill and competency needs in data transport mechanisms, the Task Force also recognizes the need to separate identity management, authentication, and authorization into three distinct competency areas, as well as to develop separate skills in database management and Data Center-wide configuration management.

"e table below illustrates the future steady-state skill and competency requirements of Data Center staff, looking toward the completion of the administrative systems renewal program in 2015. Note the expansion of skill and competency re-quirements in the area of data transport mechanisms.


PROJECTED SKILLS AND COMPETENCIES (2015)
Skill areas (columns): Technology-specific skills | Secure transaction principles | Quality assurance/testing | Business modeling | Data modeling | System mgmt. | Metadata mgmt. | Work breakdown | Scripting | Project mgmt.

Data transport mechanisms:
    File transfers: X X X X X X X
    Messaging (MOM/asynchronous): X X X X X X X X
    Publish/subscribe: X X X X X X
    Integration brokers: X X X X X X X X X
Data definition standards: X X X X
Data context between applications: X X X X X
Authentication (LDAP, Active Directory, OID, Shibboleth): X X
Authorization (class-based): X X X X X X
E-commerce, EDI, electronic funds transfer: X X X X X X X X
BizTalk: X X X X X X X X X X
Application server management (WebLogic, WebSphere Application Server, jBoss, Tomcat): X X X X X X X X X X
Web server management (IIS, Apache): X X X X X X
OpenLink: X X X
Identity management (OneDirectory, MCDB): X X X X X X X
Configuration management: X X X X X X
Database management: X X X X X X


Learning from Others

"e Task Force completed phone interviews with several institutions to get a sense of their data center organizations, the services they provide, how they manage and train their staffs, how they are preparing for challenges ahead, and a raft of other issues that our Data Center is facing. A copy of the Interview Questionnaire was emailed to the institution prior to each call. Although the questionnaire is a rather detailed, not all of the questions were addressed in the actual conversa-tions. "e questionnaire was mainly used as a tool to guide conversation and to establish a context for discussion.

"e responses from Harvard University, Purdue University, University of Illinois Urbana-Champaign, and the University of Pennsylvania follow the questionnaire.

Interview Questionnaire"e Data Center & Middleware Task Force of Networking Services & Information Technologies (NSIT) has been charged by the Vice-President and CIO to provide guidance and specific recommendations toward an appropriate long-term strategy for the NSIT Data Center and related middleware activities and services at the University of Chicago. "e Task Force is examining the current central Data Center Services operation, established and emerging middleware activi-ties across NSIT, and current and anticipated centralized data center and middleware needs of campus. Part of the charge of the Task Force is to learn from others both within higher-education and industry as to what trends, practices, and is-sues are facing data centers including ways of structuring data center operations and management; scopes of services; system, service and user support; resource allocation and renewal; staffing and funding; and change management and governance. Toward this end, we are seeking your input and advice. Attached are a number of questions pertaining to recent discussions within the Task Force and we are looking for your perspective regarding these issues.

DATA CENTER QUESTIONS

For the purposes of this survey, the Data Center is defined as a unit within the central IT organization that encompasses both services and responsibility for one or more central IT computer rooms.

1. Where do planning and maintenance of central IT infrastructure support functions, such as computer room space, HVAC, electrical, and fire detection, report?

2. In terms of the Data Center, do you have preferred hardware vendors? If so, for what?
3. In terms of the Data Center, do you have required hardware vendors? If so, for what?
4. If you answered yes to #2 or #3, who manages the activities? Data Center? Other central IT group? An organization outside of central IT such as Purchasing? An outsourced service? Other?
5. Does the Data Center use a Request for Proposal (RFP), Request for Information (RFI), Request for Quotation (RFQ), or similar process for acquisitions?
6. If you answered yes to #5, who is responsible for producing the technical portion of the document?
7. If you answered yes to #5, who is responsible for producing the business portion of the document?
8. Does the Data Center enter into, manage, or require Service Level Agreements (SLAs)?
9. If you answered yes to #8, are the SLAs standardized? Standardized with custom-tailored amendments? Customized and crafted to match each situation?
10. If you answered yes to #8, are SLAs typically created during the project phase prior to production, after the application/service is in operation but prior to production, at some other predetermined time, whenever it seems appropriate, or after the application/service is in production?
11. If you answered yes to #8, what elements are included in your SLAs? Mean time to failure (MTF)? Mean time to repair (MTR)? Disaster recovery requirements?


12. If you answered yes to #8, please describe the process used to create an SLA that involves the Data Center, including the units that are involved in formulating an SLA, implementation stages, measurement, managing service expectations, ongoing SLA review, etc.

13. Within the Data Center, what process(es) is (are) generally used to select and configure hardware and software? Annual budget process? Service needs? Data Center capabilities? Established and documented technical standards? Application requirements? External demands? At random/unstructured? What is needed is what we get?

14. Assuming the Data Center manages a central computer room, does the Data Center also provide support for systems housed remotely in other computer rooms/locations? If not, does a different central IT organization provide that service?

15. In terms of computer room space and services, what would best describe your computer room model?
a. Single computer room located on campus
b. Single computer room located off campus
c. Main computer room (primary) and backup (secondary) computer room physically separate, but still located on campus
d. Main computer room (primary) on campus and backup (secondary) computer room off campus
e. Primary computer room as part of a distributed collection of computer rooms on campus
f. Primary computer room as part of a distributed collection of computer rooms both on and off campus
g. Distributed collection of computer rooms throughout campus, no single room declared as the main or primary facility
h. Distributed collection of computer rooms both on and off campus, no single room declared as the main or primary facility
i. Managed computer room(s) within a commercial hosting facility(-ies)
j. Other

16. For those servers whose operating system is managed by the Data Center, who is responsible for hardware firewalls? Software firewalls?

MIDDLEWARE QUESTIONS

At the University of Chicago, the definition of middleware ranges broadly. The areas include data transport, along with scheduling and error notification of said transport; authentication; application servers, web servers, and BizTalk; EDI, EFT, and e-commerce; and configuration management.

17. Can you envision what middleware will be required at your institution in the next 5-10 years?
18. When you envision a support model for middleware, does that model have the middleware being managed by a single group, or spread across various functional entities?
19. In your vision of middleware support, are those entities constituted in the data center, in application areas, or in some other combination?
20. What skill sets are required of the groups that manage middleware?
21. Do those skill sets exist today in your current staff?
22. If you answered "no" to #21, are you ramping up existing staff, finding new staff, or doing something else to obtain and develop those skills?
23. Have the vendors that you would potentially consider for future needs or applications discussed middleware solutions with your institution?
24. Does your institution process, store, and manage credit card transactions and related data?
25. If you answered "yes" to #24, does your data center have any involvement with the audit process and requirements related to such programs as the PCI (Payment Card Industry) Data Security Standard, Visa's CISP, MasterCard's SDP, American Express's DSOP, or Discover's DISC?

APPLICATION HOSTING QUESTIONS

26. What new services would you like to see provided by a data center?
27. What scalability and capacity planning issues do you foresee in the future?
28. What are your business continuity requirements?
29. What are your expectations for local servers interfacing with data center services such as backup?
30. What new technologies do you see coming that the data center or some other central IT entity will need to support?


Harvard University"e Task Force interviewed Joe Bruno, Director of Network & Server Systems (NSS), and Erica Cahill, Manager of the NSS Program Management Office (PMO), of Harvard University. Harvard’s central data center is managed and sup-ported by Network & Server Systems and consists of systems administrators, database administrators, network opera-tions, data center operations, support services (campus help desk and LAN support), and the Program Management Of-fice. "e scope of NSS is institution-wide and includes services that impact the entire university. "ese services include:

• Network Services
    • Off-campus connectivity to the commercial Internet and Internet2
    • 80%-90% of the internal network infrastructure
    • Router management that connects to a significant network at the medical school campus and affiliated hospitals
• Server Operations Center (SOC)

• Application hosting: A service delivered to 85 organizations that covers most of the central administrative areas, schools, and research institutes. Some customers include the OIS (Office for Information Systems), Human Resources, Library, the Kennedy School, and the Law School.

• Server hosting: Under this agreement, there is no management of the servers; only space is provided to house the systems within the machine room. Clients have the option of choosing from an a la carte menu of additional services which include backup, monitoring, storage, system administration, and production control. The customers can pick and choose what services they want, and are charged accordingly.

• Database Administration
    • The DBAs manage the physical database environment (instances, schemas, etc.). They perform some logical work, although that is not their primary responsibility.
• Support Services
    • The group that provides help desk services. Support Services was originally intended to support the administrative areas only, but in the last three years the scope has expanded to include support for some schools. It supports desktop applications, email, and calendaring.

• Operations Group
    • The group once operated Harvard's HP mainframe. Joe initiated a program titled ICS (Integrated Customer Support) that was intended to prepare the mainframe operators to move into a different role: Harvard's new operational environment would be a multi-vendor volume-production environment, spread across several machines. This transitional program was started in July 2005.
    • Phase I will bring the operators' skills to a different level. The new skills desired include monitoring, scheduling jobs, and managing backups.
    • Phase II will prepare them to become junior system administrators.
• Program Management Office
    • Aligns customer requirements with service delivery.
    • Creates and manages SLAs from a portfolio of standard services.7

In terms of disaster recovery and remote sites, Harvard has something they refer to as Site Y. Site Y is rented data center space approximately 90 miles from Boston that houses some key infrastructure elements. The network connection to Site Y is shared with the University of Massachusetts. Harvard also has a contract with SunGard to provide disaster recovery services for seven key areas and applications under contract. These include:

1. Library
2. Financial Management
3. Data Warehouse
4. Grants Management
5. ISITE
6. Authentication/directory services
7. Alumni Affairs


7 http://www.uis.harvard.edu/server_hosting/NSS_Service_Portfolio_vol1_9.27.05.doc

For each of these areas, NSS runs a recovery test every year. It is worth highlighting that only the seven areas are covered by the contract; neither SunGard nor Site Y is a full disaster recovery service or site for the entire data center.

Turning to monitoring, Harvard currently provides two levels of monitoring: basic ping monitoring to verify a system "heartbeat" (whether the system is up or down), and operating system monitoring that covers CPU, I/O, memory, and system performance. NSS would like to start pushing out reporting and trending of what the monitoring systems record. Harvard has chosen HP OpenView for its Unix systems, and is moving to MOM monitoring for its Windows-based servers. NSS does not currently provide application-layer monitoring.
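The two monitoring levels described can be sketched as follows. The thresholds are hypothetical, and a TCP reachability check stands in for ICMP ping; production tools such as HP OpenView add escalation, trending, and dashboards on top of checks like these:

```python
import socket

def heartbeat(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Level 1: basic reachability check, verifying the system is up or down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def os_level_check(cpu_pct: float, mem_pct: float,
                   cpu_limit: float = 90.0, mem_limit: float = 95.0) -> list:
    """Level 2: operating-system metrics compared against alert thresholds."""
    alerts = []
    if cpu_pct > cpu_limit:
        alerts.append(f"CPU at {cpu_pct:.0f}% (limit {cpu_limit:.0f}%)")
    if mem_pct > mem_limit:
        alerts.append(f"memory at {mem_pct:.0f}% (limit {mem_limit:.0f}%)")
    return alerts
```

Application-layer monitoring, which NSS does not yet provide, would add a third level above these: probing the service itself (for example, a synthetic login) rather than just the host and operating system.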

Network & Server Systems has within its structure a database group that is responsible for a set of middleware that includes Apache, BEA WebLogic, and some minimal Microsoft SQL Server support. These three areas do not reflect actual demand, and as such, NSS is looking at including Tomcat and LAMP as part of its standard suite of database and middleware support. Recent trends indicate that open source will be an inevitable part of Harvard's IT future; therefore, NSS wants to expand its support for open source solutions.

Credit card transactions are handled in two ways. First, Harvard developed a credit card gateway for use within its environment. Internal applications interact with the gateway, and the gateway then interacts with a clearinghouse, thus limiting institutional exposure. Second, Harvard does have some applications that process credit card transactions directly, outside of the gateway process. In terms of monitoring and auditing the transactions and processes, Harvard's Risk Management and Audit Services groups get involved to make sure that the applications are compliant. Of note is that Harvard retains the log files from these applications for one year.
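The gateway pattern described here can be sketched as follows. The class, the stub clearinghouse, and the card value are hypothetical illustrations of the pattern, not Harvard's implementation:

```python
import uuid

class PaymentGateway:
    """Sketch of the gateway pattern: card data passes straight through to
    the clearinghouse, and internal systems keep only an opaque token."""

    def __init__(self, clearinghouse):
        self._clearinghouse = clearinghouse

    def charge(self, card_number: str, amount_cents: int) -> str:
        # Forward card data to the external clearinghouse; the gateway
        # never stores it, limiting institutional exposure.
        approved = self._clearinghouse(card_number, amount_cents)
        if not approved:
            raise ValueError("charge declined")
        # Internal applications see only this reference token.
        return uuid.uuid4().hex

# Hypothetical stub standing in for the external clearinghouse.
def demo_clearinghouse(card_number: str, amount_cents: int) -> bool:
    return amount_cents > 0

gateway = PaymentGateway(demo_clearinghouse)
token = gateway.charge("4111111111111111", 2500)  # the app keeps only `token`
```

Applications that bypass the gateway, by contrast, handle card numbers directly and therefore fall fully within the scope of PCI-style audits.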

With regard to firewalls, NSS uses the Cisco services module and is in the process of placing every subnet in the data center behind it. Firewall rules are set on a subnet level and are managed like all other processes. On data center systems, NSS does not currently use or require software firewalls, nor are they considering a move toward software firewalls.

Space is a concern for Harvard as its current central data center is almost full with both centrally managed systems and applications as well as hosted systems and services.

Finally, growth and service projections are completed for each customer and feed into an annual forecast that is completed before the start of the fiscal year. Like Chicago, Harvard's fiscal year starts on July 1. In order to better serve NSS clients and develop an accurate forecast for the central data center, NSS creates and distributes draft SLAs to each of its 85 customers six months prior to the start of the fiscal year (i.e., in January). This allows each client to review their SLA, make changes, and appropriately plan for any service or funding alterations prior to the start of the fiscal year.

MAJOR THEMES & ITEMS OF NOTE

Several themes and items of note regarding data center operations, support, and management emerged out of the Harvard interview. These include:

• Central operations center provides service to 85 internal organizations;
• Server hosting is provided as part of a range of services starting with space and extending through management and other services;
• They are proactively moving their mainframe operators to become production control personnel through a managed training program;
• Seven key applications have servers at a hot site, with a recovery test run every year to ensure that they can fail over;
• Monitoring is provided at two manageable levels;
• The database group is responsible for application servers;
• Harvard is totally funded by recharge, with no normal operational budget;
• Always take advantage of new construction or relocation opportunities;
• Being a reactive organization does not equal a successful organization; an organization must become proactive and responsive to customer needs to survive;
• A complete catalog of services that serves as a basis for every SLA is crucial for long-term success and maintaining positive client relations.

In addition, Joe and Erica recommended that we look at the commercial sector for best practices and simplify, identify quick wins, and focus on a few things at a time.


Purdue University

Brett Coryell, Interim AVP for IT Infrastructure, spoke with the Task Force regarding data center operations at Purdue. IT Infrastructure is part of the central IT organization, which in turn is part of an overall IT ecosystem that includes 36 departmental IT groups across campus.

Purdue maintains two central data centers located a mile apart on the Lafayette campus. Within these locations are both administrative and research computing systems that provide services to campus. Research computing is an area of emphasis, and as such, Purdue is attempting to centralize research servers and is providing a hosting facility for academic servers; they also offer application hosting for a price.

"e data center handles procurement through preferred vendors, equipment racking and space management, operating system installation, as well as a 6x24 printing operation, all within a management structure of four directorates: systems, telephony, networks, and operations. Despite having two data centers on campus, Purdue’s ability to fail over to a backup site is limited. "ey are planning on implementing a network ring between Lafayette and the three regional campuses in the near future, which may provide additional capability for redundancy within the regional campus network. In addi-tion, they are putting together a proposal for a new data center that will either be off campus in the building where the ERP project is currently housed, or built right next door to the existing facility. "ey are looking at either a 20,000 or 50,000 square foot building to address current and future data center needs.

"e data center has a service catalog that lists the different systems and services (like email) that they offer. For “paying customers,” they’ve adopted an SLA model that gives clients a variety of choices, depending on cost and business re-quirements. For example, a client has a service choice of 24x7 monitoring and support of systems, or business day/best effort. Depending on budget and business need, the client selects the most appropriate option and that is built into the SLA. SLAs and services in the catalog include servers, databases, backups, etc. In addition to SLAs, Purdue also offers its clients Operational Level Agreements (OLAs). "ese are generally between two parts of the organization, and lay out general practices between the parties.

"e Purdue data center has a range of maintenance windows defined for the various types of server maintenance. In addi-tion to scheduled maintenance windows, they also maintain a critical dates calendar that is used to show important dates where service interruptions could negatively impact business processes (e.g. student registration, financial closing, etc.). "ey have a separate change management calendar to indicate system-related changes. Purdue subscribes to the itSMF group, which offers a framework of best practices for change management. "ey have a homegrown configuration data-base, which shows the relationships between apps, databases, networks, etc., but are considering a change management solution from Hornbill Systems. "ey see this as a less expensive solution versus other options.

Today, Purdue’s systems are a heterogeneous mix of COBOL, Smalltalk, PowerBuilder, and some Java in the COEUS project. They also have ETL applications to move data from their online systems to their warehouse. As alluded to earlier, they are in the midst of an ERP project and have chosen SAP for their ERP system (OnePurdue). Currently, SAP uses a programming environment called ABAP. ABAP is the acronym for Advanced Business Application Programming and is the programming language used for the thousands of tiny embedded programs called transactions that make up the SAP application. The new generation of SAP modules is moving more to Java, business-modeled processing, and XML. Much of this next generation SAP product will depend on messaging through the NetWeaver messaging architecture. As for reporting tools, they had planned to move to Cognos for all their reporting needs, but halted that. They will now use the SAP end user reporting tools for the ERP application.

MAJOR THEMES & ITEMS OF NOTE
• Centralizing research servers.
• They have service level agreements with paying customers with a variety of options based on cost.
• Maintenance windows are defined.
• A service catalog lists what they offer.
• They have several calendars defining change management, important processing dates, etc.
• They host servers and applications for other organizations.


University of Illinois
The Task Force spoke with Mike Lyon, Interim Assistant Vice President for Computer Operations, and Jason Heimbaugh, Interim Director of Computer Operations Applications and Infrastructure. Together, they provided an overview of the University of Illinois data center operation.

Computer Operations houses and manages all administrative systems for the University of Illinois, a system of three campuses that encompasses 70,000 students and 27,000 employees. It currently has a budget of $17.6M that includes 176 staff, of which 77 are in operations, and three data centers located throughout Illinois:

• Champaign/Urbana: this is the offsite backup site that is also used for development, test, and failover.
• Chicago: the production data center is located on the UIC campus.
• Springfield

In addition to multiple data centers located throughout the state, Illinois uses DataGuard for replication services between Chicago and Champaign/Urbana. The scope of Computer Operations is purely administrative as each campus has its own academic cluster, email system, etc.

Within the data centers are approximately 70 Solaris servers, 150 Linux servers, and 120 Windows servers. Layered on top of that infrastructure are about 300 application servers, a number that is far too large. The team is looking into ways to significantly reduce that number through consolidation.

Illinois had been building its own administrative applications, but found them to be unmaintainable over time and as a result, they decided to go with a packaged ERP. Moving to a packaged system was made easier as Illinois decided to not make any modifications to the software. They opted to adhere to open standards and use messaging exclusively. As part of their infrastructure, Illinois makes use of an integration broker, thus eliminating the problem of a tangled mess of point-to-point data transfers. The only way that data is integrated into applications in the data center is through a message broker, utilizing messaging in the OpenEAI format. This major administrative systems paradigm shift was led by their Enterprise Architecture Group in partnership with SCT.
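The appeal of the broker over point-to-point transfers can be seen in a toy publish/subscribe sketch. This is illustrative only; it does not reflect the actual OpenEAI message formats or the SCT integration, and the topic and system names are invented. The point is that one published event fans out to any number of consumers, none of which the producer needs to know about:

```python
# Toy message broker illustrating hub-and-spoke integration: the producer
# publishes once, and every subscribed system receives the message. With
# point-to-point transfers, adding a consumer would mean changing the producer.
class MessageBroker:
    def __init__(self):
        self._subscribers = {}            # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # one publish fans out to every interested system
        for callback in self._subscribers.get(topic, []):
            callback(message)

broker = MessageBroker()
received = []

# Both the data warehouse and the directory consume the same event;
# the student system publishes it exactly once (hypothetical topic name).
broker.subscribe("student.updated", lambda m: received.append(("warehouse", m)))
broker.subscribe("student.updated", lambda m: received.append(("directory", m)))
broker.publish("student.updated", {"id": 42, "name": "A. Student"})

print(received)   # both subscribers saw the single published message
```

In production the broker is a persistent, networked service and the messages are structured (XML, in OpenEAI’s case), but the decoupling shown here is the design point.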

Aside from moving to a SOA-based administrative systems environment, Illinois also created new roles and developed related competencies to enable the infrastructure and services to continue to evolve with business and client needs. For example, they now have an internal EAI group that defines new messages and how they should be used – akin to a standards group. Not all skills and staff resources could be found from within and as such, they hired from the outside for new skills. In the process, Illinois went through a 25 percent reduction in their departmental budget, with a 30 percent staff layoff, and that resulted in even more creative thinking about how to install, operate, maintain, and evolve their infrastructure. In the end, they used Accenture for project planning and methodology, and took advantage of SCT as technical consultants.

From a technical perspective, they have separate tools for monitoring their systems. Nagios is used to roll up the numbers from multiple tools to present a consolidated view, and Clarify from Amdocs is used for problem reporting. Illinois is seeing 50 percent annual growth in disk usage, which means a doubling of storage every 21 months. They currently have 107 terabytes of SAN storage on the floor that is used primarily for Oracle databases, email, imaging, file storage, and other data. For backup, they use Veritas NetBackup, which moves data disk to disk. In conjunction with Veritas, they are using a Clarion disk library that goes into the same building, and is also offsite to another campus. Finally, their change management process was internally developed.
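The quoted doubling time follows from compound growth: capacity after t years is C × 1.5^t, so it doubles when 1.5^t = 2, i.e. t = ln 2 / ln 1.5 ≈ 1.71 years, or roughly 21 months. A quick check:

```python
# Sanity check for the storage-growth claim: at 50% annual growth, capacity
# doubles in ln(2)/ln(1.5) years, which works out to about 21 months.
import math

annual_growth = 0.50
doubling_years = math.log(2) / math.log(1 + annual_growth)
doubling_months = doubling_years * 12
print(round(doubling_months, 1))   # ≈ 20.5 months, i.e. about 21
```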

MAJOR THEMES & ITEMS OF NOTE
• Data movement is through a messaging broker, managed by a central group.
• Monitoring is achieved through separate tools, but data is aggregated centrally.
• Disk growth is on the order of 50 percent annually.
• Application server consolidation is ongoing.
• Their change management process was internally developed.


University of Pennsylvania
Ray Davis, Executive Director, Systems Engineering and Operations, and Donna Manley, Director, Data Center Operations, discussed the University of Pennsylvania’s central data center operation with members of the Task Force.

First and foremost, Penn’s data center is run as a business and their senior management all come from a business background. Against this backdrop, the organization of approximately 60 people is structured into five groups:

• Operations and production control, a 24x7x365 operation that has roughly half of the overall staff;
• Database administration;
• Technical services, a group of Unix, Linux, and Windows systems administrators as well as mainframe systems programmers who are on call 24x7;
• Products and strategies group that is responsible for business continuity;
• Security administration that includes access profiles as one of its responsibilities.

Database administrators and systems administrators are assigned on a primary/secondary basis to ensure there’s always someone capable of backing up another staff member if they are absent or leave the institution.

"e data center itself is a 6,000 sq. ft. machine room with an integrated command center where all systems consoles are consolidated and centralized. "is allows Penn to operate a near lights-out machine room of 300 machines, 95 percent of which are enterprise servers and of that, 70 percent are AIX and the rest Solaris. "ey have a newly purchased SAN which consists of 60TB. "ey have an offsite recovery facility (Sungard) about five miles from their location. As to the management of the physical facility (Penn does not have distributed or multiple data centers), Penn has a Facilities & Quality Engineer. "is person is responsible for everything related to facilities including power, space, and HVAC. "is person is on call at all times. As for hardware acquisitions, Ray and his group do the purchasing of hardware; software is handled by the technical services group. As for firewalls, they have one firewall for the data center managed and main-tained by their systems administration group.

Penn does offer a service to house servers for individuals or groups on campus. This allows units to take advantage of the security and infrastructure of the data center. When Penn’s data center is approached to house a server, the team makes a recommendation based on currently supported products. If they receive something that they are unfamiliar with, they will send people off to training to get up to speed on the technology. They don’t, however, mandate what can be used in the data center. The Penn hosting environment is currently a caged area that houses about 100 servers. This is a secure space in which campus groups can house their own servers. Hosting clients can pretty much bring in whatever they want, with the stipulation that they cannot put a UPS on the server, and that it must be rack mountable. There have been discussions on having multiple data centers under one roof; these would be different “grades” of data center in one building. This would allow other schools to house their servers in the main data center, and take advantage of the power and security control.

Since 1996, Penn has had remote mirroring of critical systems. They use dark fibre to accomplish the mirroring and have put IBM Sharks in both the data center and at Sungard. To ensure that the redundancy is fully functional and operates as expected, they do two offsite tests per year with the goal of recovering critical servers within 12 hours of an outage. The targets for the mainframe and less critical servers are set to within 36 hours of an outage.

Turning to monitoring, Penn does not currently use a common standard monitoring tool, so as one might expect, they have a number of monitors in place today that vary across organizations and services. The long-term goal is to have a comprehensive set of monitoring tools that could be housed in the command center that would be more predictive and proactive rather than reflective and reactive. Toward that end, they’ve begun the process of seeking a new core monitoring system and have an RFP out to a number of vendors (IBM, Computer Associates, and BMC Software). Finally, along with monitoring is problem reporting. Penn is currently using Remedy for problem, change, and asset management. In the near future they would like to include approvals in the mix. To load test applications, they’ve chosen Mercury Interactive as their primary tool.

Penn subscribes to the ITIL standard for change management offered by itSMF. In addition to this process, they have weekly meetings to review changes, after which a list of changes is distributed by email. Service-Level Agreements are a core part of their operation. They have been developing and using SLAs for about seven years and currently have upwards of 45 agreements in place; every school and large administrative area has at least one or more. The SLAs are based on a template from Gartner which was initially overly complicated. Penn simplified the SLA and reduced it to at most six pages


per agreement. A cost proposal is appended to each SLA. A user of a Penn SLA can be any group at the institution, owners of applications or servers, or groups doing application support.

"e move toward a lights-out operation is having an impact on staffing, not the least of which is the need to develop new skills among the current staff. To help with that transition, they have provided their operators with training in MS Office products, JCL, and other technology related areas. "ey would like to transition the operators to become production control staff, who can manage the environment, and deal with level I and II problems, as well as migrate some existing production control staff into system administration roles.

MAJOR THEMES & ITEMS OF NOTE
• Run the data center like a business.
• They have a command center which consolidates consoles and are moving toward complete automation of processes.
• 24x7 operation with a production control group, systems administrators, and DBAs on call.
• They have approximately 45 service level agreements in place, with every school and large administrative area having a respective agreement.
• Weekly change management meetings.
• Data center is responsible for hardware acquisition.
• Facilities and Quality Engineer responsible for power, space, and HVAC.
• House servers for individuals or groups on campus.
• Provide caged area for servers to be stored in the data center.
• Remote mirroring of critical systems is in place, with annual tests to ensure recoverability.
• They have an offsite recovery facility (Sungard) about five miles from their location. They test recovery at least annually.


Recommendations

"e mission of NSIT Data Center Services organization is to strategically, proactively and collaboratively develop, en-hance and support the University's critical computing infrastructure by providing high quality and cost-effective technol-ogy solutions for the University of Chicago’s academic and administrative enterprise operations and support.

Vision
Because of the changing needs of the diverse University of Chicago organizations, DCS consistently evolves its customer service, personnel, infrastructure solutions and cost models to match the technology needs of its partners. Therefore, the vision of DCS also includes leading by example in the implementation of best practices, automated systems, and state-of-the-industry methods in the areas of production control, change management, environmental and security controls, and other data center disciplines. In addition, DCS also assumes a collaborative role in the promotion of best management practices for servers and equipment outside the DCS Computing Center in academic departments and research environments.

Offered to central administrative and academic technologies as well as hosted partners, these infrastructure services are either fully provided or contracted in whole or part by agreement:

• Custodial operations and monitoring services for mission-critical servers and equipment that require state of the industry security and support. Services include system administration, production control, database administration, various middleware components, data transport, change management,

• "e promotion of strategies for cost-effective, secure, computing architectures that incorporate current trends in storage, processing, power, high availability, and environmental efficiencies,

• Consulting services in the specification, purchasing, configuration, maintenance, and management of hosted hardware and servers,

• Assuming a leadership role in initiatives defining and implementing disaster recovery, business continuity planning, operational fault tolerance, and high availability,

• Provide proactive systems management including systems capacity planning, monitoring for security alerts and other critical updates and patches, and new software to enhance systems functionality.

Summary Recommendations

Run Data Center services like a business
Data Center services should be clearly defined in terms of service expectation and pricing as reflected in Service Level Agreements (SLAs). They should be provided in a manner which follows established standards, uses best practices in management and operation, and establishes formal process and project coordination and management protocols. Standards and governance processes should be established and maintained in partnership with clients to ensure that services are meeting the client needs.

Provide services that can operate in a business model
New functional areas which should be assigned within the Data Center include a selected subset of core middleware, database administration and data management. Each of these functional areas is focused on operational responsibilities such as installation, performance, overall health of related software and delivery of service. Other significant pieces include managing computer room infrastructure and space, backup, archival, restoration services, hosting services and disaster recovery.


Manage the scope of Data Center services and responsibilities
There should be a clear definition of what responsibilities lie within the Data Center that is agreed upon by the entire organization. Conduits for collaboration and participation in other functions within NSIT should also be clearly defined.

Develop staff to support the services of the Data Center
Data Center management should take a proactive approach regarding skill assessment, staff development, skill management and evolution, and career development. While this is particularly important in taking the Data Center into the future, all of NSIT would benefit from a structure which supports these activities.

Address boundary and interaction issues
Although these issues are beyond the scope of these recommendations, they are included here because they must be resolved for the Data Center to fulfill the primary recommendations. These issues include resolving broader NSIT and client-related issues, establishing strategies for application service providers and data archiving, and evolution of standards and interfaces over time. Last, but certainly not least, address the ongoing issue of problem ownership.

Additional Comments
The Task Force believes that the Data Center is a critical piece in the overall IT infrastructure for the university, and feels that recent shifts in technology (widespread adoption of virtualization for enterprise application deployment and disaster recovery, enterprise shared storage architectures, uptake of service-oriented architectures, etc.) further push the need for a core data infrastructure (both network and services) on which other tailored services can be built and delivered by both NSIT and other units across campus. The Task Force firmly believes that the success of the Data Center depends on establishing sound business practices that are based on well accepted standards in the enterprise IT industry, recognizing of course, that such practices should be flexible enough to accommodate the wide number of client business models (administrative, academic, grant-driven, etc.) that are in place across campus.


Appendix: Application Subcommittee Interview Questions

What new technologies do you see coming in your business area?

What current services do you receive from the Data Center?

What new services would you like to see provided by the Data Center?

What application/service life cycle processes would you like to see in place?

How much do you want to be involved in a partnership with the Data Center?

What kinds of interactions would you like to have with the Data Center?

Will your requirements drive you toward an ASP model?

Do you see a shift in your requirements of the Data Center physical facility?

Do existing Data Center employees have the skills to meet your needs?

What scalability/capacity planning issues do you foresee?

What are your business continuity requirements?

What are your expectations for your local servers interfacing with Data Center services, such as backup?

What specific changes in your applications are coming in the near to mid term?



Glossary

24x7: twenty-four hours per day, seven days per week.

24x7 Monitoring: overseeing the execution of a particular task or set of tasks, measuring the performance and response during the execution, reporting exceptions, and alerting personnel of possible problems if measured parameters are found to be outside the normal operating range or the tasks have failed.

24x7 Operations: maintaining core business functionality throughout all hours of the day and night.

24x7 Problem Identification: the process of identifying issues that are hindering the operation of some aspect of the data center. In a 24x7 shop a staff member is on hand or close at hand who can fill this role.

24x7 Problem Resolution: the process of receiving (via problem identification), reviewing and resolving issues that hinder the operation of some aspect of the data center. Staff is on hand 24x7 to provide this service.

24x7 Processes: operational tasks and continual operations, both manual and automated, that run all day and night.

Advisory Process: the process of communicating with clients to determine their needs and the resources required to meet those needs.

Appliance Impact: measures costs and cost reduction of an appliance, but also weighs the enabling value of a technology in increasing the effectiveness of overall business processes.

Appliance: a device that is attached to the network, provides a service or set of services, and is supported and/or managed by a contracted outside entity.

Application Assessment: the process of determining the impact of a particular application on current and future planned business and operational processes. This can often include an assessment of the application's vulnerability to network-based attacks and the value of the data within the context of an application to a potential attacker.

Application Implementation: the process of installing, configuring and troubleshooting a given application or software package/suite.

Application Maintenance: the care and upkeep of an application. This typically includes application of patches/bug fixes and application upgrades. This may or may not include end user support.

Application Servers: computers designated for running a particular application or group of applications.

Application Teams: groups of staff members assigned to design, test, deploy and support a given software package.

Archive Management: process by which data is cataloged, maintained, and archived. This also includes a mechanism by which this data can be retrieved.

ASC: University of Chicago Administrative Systems Council.

ASP Application Impact: measures costs and cost reduction of an externally-hosted application (ASP), but also weighs the enabling value of a technology in increasing the effectiveness of overall business processes.

ASP: application service provider.

ASR: administrative systems renewal.

Asynchronous Messaging: a communication scheme which does not rely on the recipient to acknowledge receipt. Typically used in systems implementing a web services model.


Audit: evaluation of servers, dependent services, support, and change management activities against defined standard practices.

Backup Management: administer and manage a backup service.

Backup Service: typically, a centrally run file storage and restoration service that is designed to minimize the risk of data loss on remote computers by copying and maintaining a history of information stored on the remote machines.

Benchmarking: measurement according to specified standards for purposes of comparison. Commonly used to compare alternate products, systems and services and to help improve the measured product, system or service.

Business Continuity Planning: within the context of NSIT, creating a plan to resume partially or completely interrupted functions within a predetermined time after a disruption or disaster.

Change Process: a defined method for effecting, approving, communicating, and managing alterations to systems, their infrastructure, and related services.

Closed-Loop Communication: continued communication up to and through completion with follow-through, verification, and agreement among all participants.

Collaboration: the act of working jointly, especially in a joint intellectual effort.

Commonality: a shared attribute or set of shared attributes.

Communication, External: information flow outside of the immediate group, e.g., information flow beyond NSIT with end-users, the university at large or vendors/suppliers.

Communication, Internal: information flow within the immediate group.

Consolidation: combining of services in order to share resources such as services, storage, labor, and space.

Consulting/Analysis: providing technical advice and evaluation of systems and system solutions.

Controlled Change Model: establishing standard processes for testing and verification of changes as a prerequisite to receiving management approval.

Customer Base: within the context of this report, the primary customers who implement and administer systems provided by the Data Center, and include the learning management and library systems. Secondary customers include staff who implement and administer systems used by discrete parts of the university such as schools and departments, and need centralized services and middleware.

Customer Expectations: specific services that customers assume will be provided. Customer expectations can be managed by service level agreements.

Customer Relations: a proactive, ongoing effort providing two-way communication between customers and the Data Center.

Cyberinfrastructure: IT infrastructure and services specific to the support of academic research.

Data Center & Network Relationship: the data center is dependent upon the network to operate and the network is dependent on the data center to provide physical space and server administration.

Data Center (DC): can refer to the central machine room and related technologies and infrastructure. See also Data Center Services.

Data Center Infrastructure: physical properties of a computer room required to support the computer hardware installed in it such as air conditioning, conditioned power, fire detection, etc.

Data Center Services (DCS): major unit within NSIT that encompasses services and operations of Production Services, Production Technology, and Technology Assessment & Resource Planning, and includes the central machine room, and related technologies and infrastructure.

Data Feeds: information transferred from one system to another. Processes by which data or media are moved to a position where they can be used by another system.


Data Mining: analysis of data in a database using tools which look for trends or anomalies without knowledge of the meaning of the data.

Data Renewal & Evolution: assuring that all data, on-line, near-line, off-line, and archived can be retrieved/accessed when required. Implies that data on old medium will need to be migrated to a more current medium and technology.

Data Warehousing: maintaining a data warehouse. A data warehouse is, primarily, a record of an enterprise's past transactional and operational information, stored in a database designed to favor efficient data analysis and reporting (especially OLAP).

Database: one or more large structured sets of persistent data, usually associated with software to update and query the data.

DCnext: shorthand for the “next evolution of the NSIT Data Center, related staffing and services.”

Defined Service Levels: explicit descriptions of the specific services including time of coverage, response times, scheduled downtimes, etc. These can be used to manage customer expectations.

Dependency Management: the process of mapping, monitoring and managing the parts of systems that impact or are impacted by other systems.

Disaster Recovery: a plan to transfer operations and restore services following a major disruption due to forces beyond normal control.

Duplication of Effort: the situation when two separate groups work on the same thing. An example would be if multiple entities within the university were to design separate systems for archiving digital assets.

Duplication of Expertise: the situation when multiple groups within the university hire staff with similar credentials. Not necessarily a bad thing, depending on how esoteric the expertise in question is.

End-User Services: the service is either intended for consumption by customers or has grown to be used by such custom-ers.

Functional Separation: dividing up of a service into multiple parts or layers based on what they contribute to the whole. In the context of the Data Center, functional separation refers to dividing an IT service into operating system, database, and operations.

Funding Stream Separation: refers to either the funding of a single service by multiple sources or the funding of a single entity via multiple sources. When the second definition is used and sources tie back at or close to a 1:1 ratio with the services provided, then it typically reflects a cost recovery model.

Funding, Core: the basic money allotted out of the university central funding pool each year for the operation of the Data Center part of NSIT. This needs to be sufficient to support systems and middleware that are identified as core, i.e. those used by all of the university as opposed to a particular department, division or school. Often referred to as “ledger 4” funds.

Funding, Recharge: to provide money for paying for resources (staff, hardware, software, networks, etc) by charging the customers of the data center a fee.

Future Regulatory Compliance: ensuring that the University will be able to support any future governmental regulations dealing with data permanence, security, disclosure, or confidentiality.

Governance: the execution of authority over the management of a given area.

Hosting: the business of housing, serving, and maintaining files or data, applications, web sites, or security on behalf of another.

Identity Management: a broad administrative area that deals with identifying individuals in a system and controlling their access to resources within that system by associating user rights and restrictions with the established identity.


Interoperability: the ability of a system or a product to work with other systems or products without special effort on the part of the customer. Products achieve interoperability with other products using either or both of two approaches: by adhering to published interface standards, or by making use of a "broker" of services that can convert one product's interface into another product's interface "on the fly".

Inventory Analysis: maintaining records of servers, the services that run on those servers, contacts for the hardware, maintenance, and support, and lists of software on each respective piece of hardware, in order to address other business needs.

Least Number of Servers Model: the goal of consolidating applications and related support functions onto servers whose purpose is to support certain types of processes with similar requirements, such as database support, reporting, web services, etc.

Lifecycle Funding: who pays the cost of the product (hardware, software, application, etc.) and all associated items (support, infrastructure, maintenance, etc.), starting at conception and going through end of life.

Logging: the process of recording all system modifications, accesses, failures, and activity to a file for the purpose of recovery or accountability. The log file can be saved and/or archived, and allows for later recovery, discovery, or reporting.

Long Term Storage: the storage and/or archiving of data in digital form with the purpose of retrieving the data at a later time. “Long Term” implies that the data will be held for a relatively long period of time, measured in months, years, or possibly decades.

Mainframe Model: an industry term for a large computer, typically manufactured by a large company such as IBM for the commercial applications of Fortune 1000 businesses and other large-scale computing purposes. In our particular case, this term is used in reference to our computer system running versions of the IBM MVS Operating System.

Maintenance/Support: the assistance vendors provide to technicians and end users concerning hardware, operating systems, and programs.

Middleware Centralization: the act of placing the management of middleware in a central IT organization. This organization provides support, training, standards, and guidance to the users of the middleware service.

Middleware Defined by Application: the process of allowing middleware decisions and acquisitions to be driven by individual applications on an ad hoc basis.

Middleware Defined by Architecture: the process of making middleware decisions and acquisitions based on a well-thought-out architecture and system plan. This implies the use of standards for individual product purchases; for access and use by applications or services; and for management and support activities.

Middleware Implementation: the development and installation of any programming that serves to "glue together" or mediate between two or more separate and often already existing programs. A common use is to allow programs written for access to a particular database to access other databases. Typically, middleware programs provide messaging services so that different applications can communicate.
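To make the messaging role concrete, here is a minimal, hypothetical sketch (in Python, not drawn from the report) of a broker that lets two programs communicate without knowing about each other directly; the topic and message contents are invented for illustration.

```python
from collections import defaultdict, deque

# Toy message broker: middleware mediating between two separate,
# already existing programs. All names below are hypothetical.
class MessageBroker:
    def __init__(self):
        self._queues = defaultdict(deque)

    def publish(self, topic, message):
        """A producer drops a message onto a named queue."""
        self._queues[topic].append(message)

    def consume(self, topic):
        """A consumer takes the oldest message, or None if empty."""
        queue = self._queues[topic]
        return queue.popleft() if queue else None

# A registrar application publishes an event; a billing application
# consumes it, with the broker gluing the two together.
broker = MessageBroker()
broker.publish("enrollment", {"student": "s123", "course": "CS101"})
event = broker.consume("enrollment")
print(event["course"])
```

Real middleware products add persistence, delivery guarantees, and cross-machine transport, but the mediation pattern is the same.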

Middleware Maintenance: software upgrades to maintain compatibility of middleware with the functions, applications, and/or databases it supports.

Middleware Management: the administrative function that coordinates middleware solutions with users, and is responsible for updates and appropriate documentation.

Middleware: a set of service components which are usable by many applications or other services; these reusable component services have uses beyond a single application. [Data Center & Middleware Task Force Subcommittee on Middleware Definition]

Software occupying the layer between the operating system and the application, providing the application with an interface for receiving services; software that is neither part of the operating system nor an application. It occupies a layer between the two, providing applications with an interface for receiving services8. [Data Core Technology Definition]

Non-Technical Interfaces: a consistent set of rules for communicating with both internal and external business and process entities.

Non-Traditional Machine Room: phrase used to describe a facility that houses enterprise servers but lacks the infrastructure normally associated with supporting such computers, or a typical computer room environment that is not staffed by support personnel during the course of normal operation.

Open Source: Generally, open source refers to any computer program whose source code is made available for use or modification as users or other developers see fit. Open source software is usually developed as a public collaboration and made freely available under license.

Operations: the operational component of a service, requiring staff and/or other resources.

OS System Implementation: the installation and configuration of an operating system.

OS System Maintenance: updating of an operating system as distributed by the vendor who owns that OS.

OS Teams: support personnel who are responsible for the software support of one or more types of computer operating systems such as IBM AIX, Sun Solaris, Microsoft Windows, etc.

PCI: Payment Card Industry Data Security Standard.

Physical Facilities: tangible attributes of the data center including electricity, cooling, network cabling, equipment housing (racks, work surfaces), etc.

Platform Separation: refers to the sub-layers (operating system, hardware) of a technology on which the discrete functions of a service operate (see functional separation).

Platform: hardware and related operating system.

Portal: commonly refers to a website that acts as a gateway to online services, aims to be a major starting location for users, or that users tend to visit as an anchor site.

Portfolio Management: the process of establishing evaluation and selection criteria, creating and optimizing planning scenarios, communicating decisions, monitoring progress, and managing new ideas for the strategic management of organizational information technology investments, resources, and commitments in meeting set goals.

Portfolio of Customer Services: a catalog of sorts that details the services (commodities) offered by the Data Center. Some are offered at no charge; others are fee-for-service.

Problem Management: the formal process whereby individuals come together to identify, resolve, implement, and document encountered issues. The process includes keeping a formal audit trail/archive of personnel involved, recording findings, documenting which actions were taken and by whom, and logging when (date and time) activities and issues were discovered and resolved.

Production/Development: the physical or virtual separation of the production, test, and development infrastructure (hardware, software, data, etc.) associated with applications/systems supported within the Data Center.

Project Management: a function requiring effective project management methodologies.

Regulatory Compliance: the assurance that we are, to the best of our ability, following all federally mandated compliance requirements and best audit principles.

Resource Sharing: working with other groups and/or other technical and/or non-technical staff to provide guidance, assistance, and at times, infrastructure resources to achieve an agreed goal.

Risk Analysis: the process of determining the level of risk across a service, a suite of services, or an organization. In terms of the Data Center, this includes IT services, physical plant, single points of failure, etc.


8 Data Core Technology, Inc., July 2005, http://www.data-core.com/glossary-of-terms.htm

Risk Management: putting into action an ongoing plan to address, mitigate, or eliminate risks identified through the risk analysis.

Second Data Center: an additional computing facility located at a distance from the primary data center that allows for critical system redundancy, housing of servers, expansion of IT services, etc.

Security, Physical: tangible steps such as locks, security guards, keypad entry systems, access card systems, etc. that are used to ensure the safety of physical resources from unauthorized access, contact, tampering and theft.

Security, Virtual: commonly refers to managed controls on network-based access to servers, operating systems, applica-tions, and data, and includes the use of monitoring software to report on who accessed/altered what and when.

Service Level Agreement: an agreement between the supplier of a service and the users of that service that sets out the levels of service that will be offered, preferably in quantitative terms, and the obligations on the user of the service. A typical agreement for a computing service or network service will set out the expected levels of service measured in terms of one or more of the following: availability, fault reporting, recovery from breakdowns, traffic levels, throughput, response times, training and advisory services, and similar measures of the service quality as seen by the end-user. The agreement will also set out user costs and charges, the provision of access to premises for service contractors, and standards of training to be achieved by users. The agreement may form part of a legal contract, yet is equally likely to be found within a single large organization where one unit within the organization offers services to other units, but where a legally enforceable contract would not be appropriate.9
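As an illustrative example of stating a service level in quantitative terms (the targets below are hypothetical, not figures from the report), this sketch converts an availability percentage into the downtime it permits over a 30-day month:

```python
# Hypothetical illustration: translate an SLA availability target into
# the maximum downtime it allows over a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime permitted while still meeting the target."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes(target)
    print(f"{target}% availability -> {minutes:.1f} min/month")
```

Expressing a target this way makes it measurable, which is what distinguishes a quantitative service level from a vague promise of "high availability."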

Service: work that is done to satisfy the needs or wants of a customer.

Service Oriented Architecture: a loosely-coupled IT architecture that relies on the interaction of software agents for communication, interaction, and data transfer.10

Skill Renewal & Evolution: providing the necessary professional development opportunities for staff that are in step with changes in technology, services, and resource needs.

SLA: service level agreement.

SOA: service oriented architecture.

SOAP: simple object access protocol.
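For illustration only (the operation and element names below are hypothetical), the following sketch assembles the skeleton of a SOAP message, an XML "envelope" whose body carries the payload exchanged between systems, using Python's standard XML library:

```python
import xml.etree.ElementTree as ET

# Sketch of a SOAP 1.1 message skeleton. The operation "GetCourseList"
# and its contents are hypothetical, chosen only for illustration.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetCourseList")
ET.SubElement(request, "department").text = "Physics"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

In practice the serialized envelope would be sent over a standard transport such as HTTP, which is what lets otherwise unrelated systems exchange it.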

Soft-Drop: saying “no” with a recommendation of alternatives.

Standard Bindings: common methods of exposing data for the purpose of exchanging information between systems.

Standard Environment: a common platform (or platforms) that can be used with a service.

Standard Interfaces: consistent set of rules used for communicating between systems.

Standard Support: the default Service Level Agreement if none is explicitly defined.

Standard Vendors: common pool of suppliers to choose from for providing some or all parts of a service.

Standard: documented criteria, voluntary guidelines, recommended practices, best practices, or mandatory requirements.

Strategy: a defined, well-known direction for a particular function or service.

Technology Assessment: process of looking at innovation to determine if its introduction into current or planned services would be beneficial.

Technology Procurement: process of acquiring information technology-based goods or services.

Technology Renewal & Evolution: keeping the computing infrastructure up-to-date as technologies evolve.


9 A Dictionary of Computing. Oxford University Press, 2004. Oxford Reference Online. Oxford University Press. University of Chicago. 6 September 2005.

10 http://www.xml.com/pub/a/ws/2003/09/30/soa.html

Test Environment: the hardware, software, and tool suite used by an applications and/or systems team to simulate, evaluate, and assess the interaction, performance, and stability of systems, drivers, and/or applications.11

Tools: off-the-shelf or locally developed software applications that aid in or monitor the operation of a function.

Transaction Monitors: middleware programs that mediate between clients and servers in order to optimize database performance by acting on behalf of the clients12; tools used to measure and analyze the performance of transaction-oriented systems such as databases, web applications, etc.

Vendor-Application Impact: the effect on business processes caused by changes, modifications, and/or updates to technology provided by a vendor.

Web Services: modular, self-describing, self-contained services that are accessible by means of messages sent using standard Web protocols, notations, and interfaces13 that abstract what is required of transactions from the software that stores and processes data14.

Workflow Assessment: analysis and evaluation of a workflow.

Workflow: the tasks, procedural steps, organizations or people, required input and output information, and tools needed for each step in a business process15.


11 http://www.sitetestcenter.com/software_testing_glossary.htm

12 http://www.computerworld.com/databasetopics/data/story/0,10801,64305,00.html

13 http://www.ecots.org/ and http://www.w3.org/2003/glossary/subglossary/xkms2-req/

14 Turner, Budgen, Brereton, “Turning Software into a Service,” Computer, Oct. 2003, http://doi.ieeecomputersociety.org/10.1109/MC.2003.1236470

15 http://www.data-core.com/glossary-of-terms.htm


References

A Master Plan for Administrative Systems. Administrative Systems Council, Univ. of Chicago, September 2001, public release ver. fall 2002

Barnes, Michael. “Middleware Convergence and Application Platform Suites: Understanding Business Benefits.” Gartner, Inc. Research, August 5, 2005, ID# G00129353

Biscotti, Fabrizio, Joanne M. Correia, Laurie F. Wurster, and Yanna Dharmasthira. “Forecast: AIM and Portal Software, EMEA, 2004-2009 (Executive Summary).” Gartner, Inc. Research, August 1, 2005, ID# G00129920

Bitterer, Andreas. “IBM’s Data Integration Strategy Takes Shape.” Gartner, Inc. Research, September 15, 2005, ID# G00129345

“Card Holder Information Security Program.” Operations & Risk Management, Visa U.S.A., 2006, http://usa.visa.com/business/accepting_visa/ops_risk_management/cisp.html

Correia, Joanne M., Fabrizio Biscotti, Yanna Dharmasthira, and Laurie F. Wurster. “Forecast: AIM and Portal Software, Worldwide, 2004-2009 (Executive Summary).” Gartner, Inc. Research, July 13, 2005, ID# G00129362

Dorr, Erik. “What Oracle’s ‘Fusion Compliant’ Means and Why You Should Care.” Gartner, Inc. Research, August 8, 2005, ID# G00129210

Federal Enterprise Architecture Framework. September 1999, ver. 1.1

Geishecker, Lee and Simon Hayward. “Oracle’s Fusion Affects Applications and Middleware.” Gartner, Inc. Research, July 22, 2005, ID# G00129360

Genovese, Yvonne, Michele Cantara, Simon Hayward, Gene Phifer, Massimo Pezzini, and Yefim V. Natis. “NetWeaver Positions SAP as a Business Process Platform Enabler.” Gartner, Inc. Research, May 6, 2005, ID# G00126204

Handler, Robert A. and David Newman. “Use Enterprise Information Architecture Techniques to Move to Information Management.” Gartner, Inc. Research, September 2, 2005, ID# G00130267

Holincheck, John, Simon Hayward, Jeff Woods, Lee Geishecker. “Oracle’s Project Fusion: Application Questions.” Gartner, Inc. Research, September 2, 2005, ID# G00129933

Malinverno, Paul. “Integration Competency Centers Demand a Wide Set of Skills.” Gartner, Inc. Research, August 22, 2005, ID# G00128007

Malinverno, Paul. “The Four Steps to Building an Integration Competency Center.” Gartner, Inc. Research, October 19, 2004, ID# G00123448

“MasterCard Site Data Protection Program.” MasterCard International, Inc., 2006, https://sdp.mastercardintl.com/

Natis, Yefim V., Massimo Pezzini, and Kimihiko Iijima. “Magic Quadrant for Enterprise Application Servers, 2Q05.” Gartner, Inc. Research, April 15, 2005, ID# G00127137

Network and Server Systems Service Portfolio v1.0. Harvard University Information Systems, September 2005

Oracle Applications for Higher Education: Committed to Your Success. Oracle White Paper, July 2005

Payment Card Industry Data Security Standard. MasterCard International, Inc., January 2005


Pezzini, Massimo and Yefim V. Natis. “Oracle Repositions Middleware to Bring Applications Together.” Gartner, Inc. Research, May 2, 2005, ID# G00127472

Pezzini, Massimo, and Jess Thompson. “IBM’s WebSphere Integration Offering Finally Gets Integrated.” Gartner, Inc. Research, April 10, 2006, ID# G00138448

Pezzini, Massimo, Kimihiko Iijima, John Radcliffe, and Yefim V. Natis. “Oracle Fusion Middleware Takes the Limelight.” Gartner, Inc. Research, December 22, 2005, ID# G00136886

Pezzini, Massimo, Kimihiko Iijima, Yefim V. Natis, and John Radcliffe. “The Myths and Realities of Oracle Fusion Middleware.” Gartner, Inc. Research, December 22, 2005, ID# G00132703

Pezzini, Massimo, Yefim V. Natis, and Yvonne Genovese. “New Partnerships Raise SAP NetWeaver’s Industry Standing.” Gartner, Inc. Research, May 25, 2005, ID# G00128076

Prior, Derek. “SAP NetWeaver Projects Need Careful Infrastructure Planning.” Gartner, Inc. Research, May 11, 2004, ID# TG-22-6287

Schulte, Roy W., Jess Thompson, Yefim V. Natis, Massimo Pezzini, Jim Sinur, L. Frank Kenney, Michele Cantara, Joanne M. Correia, Paolo Malinverno, Benoit J. Lheureux, and Kimihiko Iijima. “Magic Quadrant for Integration Backbone Software, 1H05.” Gartner, Inc. Research, April 15, 2005, ID# G00127186

Thompson, Jess. “Consider Vendor-Related Criteria When Choosing AIM Technology.” Gartner, Inc. Research, July 20, 2005, ID# G00129298

Turner, Mark, David Budgen, and Pearl Brereton, “Turning Software into a Service,” Computer, October 2003, http://doi.ieeecomputersociety.org/10.1109/MC.2003.1236470

Voloudakis, John. “Hitting a Moving Target: IT Strategy in a Real-Time World.” EDUCAUSE Review, vol. 40, no. 2, March/April 2005: pp. 44-55

White, Andrew, John Radcliffe, Yvonne Genovese, and David Newman. “SAP Revises Master Data Management Product Road Map.” Gartner, Inc. Research, June 13, 2005, ID# G00127307

Woods, Jeff, Simon Hayward, James Holincheck, and Lee Geishecker. “Oracle’s Project Fusion: Technology Questions.” Gartner, Inc. Research, August 25, 2005, ID# G00129986
