
IBM GTO 2010


Introduction
Imagine being able to spin the clock forward to peek into the world of the future – to wonder at the advances of society, business and technology five to ten years from now.

That’s exactly what IBM Research has strived to do since 1982 with the Global Technology Outlook (GTO). IBM Research and its global community of some of the world’s top scientists consider the cultural and business applications in which technologies will be used – and the impact they will have on IBM and the world.

The GTO provides a comprehensive analysis of that vision. In past years the GTO has predicted such trends as autonomic and cloud computing, the rise of nanotechnology, and the ability to create smarter cities and smarter systems through the application of analytics.

An intensive year of work goes into each GTO – generating the ideas, gathering data, debating the issues with colleagues and peers, and then presenting a final report to IBM Chairman and CEO Samuel Palmisano. Throughout the process, the analysis of the GTO topics focuses on both the technology at hand and the business and societal implications of each trend being analyzed.

Inside IBM, the GTO is used to define areas of focus and investment. Externally, it is shared broadly with a range of IT influencers - including clients, academics, and partners - through education and client briefings.

This outlook is not designed solely to benefit IBM. In fact, some of the trends explored may be well beyond IBM’s portfolio of offerings. And that’s what makes the GTO such a success each year – providing clients and partners with an impartial view of the world and the evolution of IT across business, economic and natural systems.

This year’s GTO is no different.

When IBM set its sights on helping to create a smarter planet two years ago, we predicted the evolution of an increasingly instrumented, intelligent and interconnected world. Since that time, the application of technology has helped create real-world monitoring and data prediction for water systems, cities and municipalities, utilities and power grids, financial and logistics providers, and traffic and transportation entities.

The next phase of smarter planet looks to build upon that progress and momentum. How can these systems be optimized, adapted and reused to create increasingly intelligent modeling and data-based prediction? Can technology help transform healthcare to provide better patient outcomes while reducing costs? Will advances in software help business leap past the performance limitations of hardware? Can new computing models like cloud and hybrid computing help manage legacy hardware, processes and applications? What can be learned by looking across multiple systems and platforms that can help optimize workloads?

The 2010 Global Technology Outlook that follows takes a far-reaching view into these challenges at a global level and offers insights into the future. This report is designed so that your organization can benefit from these insights as much as we have here at IBM.


The increasing complexity of healthcare is a major contributor to rising costs. At the same time, healthcare is becoming an IT-based industry, built on a continuum of machine-generated digital information.

Today, emerging early diagnostics can lead to earlier intervention, resulting in improved outcomes and lower costs at the front end. But what emerges at the back end are reams and reams of data that, when added to the data already available, potentially complicate decision making at the point of care.

To improve healthcare, we must make data more usable by synthesizing evidence from it and making that evidence readily available at the point of need.

Improvements in the delivery of healthcare increasingly will rely on having usable evidence readily available at the point of care. Systems that are designed to provide clinical decision intelligence and evidence-based assistance can improve the ability of providers to anticipate need and tailor delivery.

Evidence generation
The process of transforming data into evidence is called the generation phase. It is in this phase that comparative effectiveness and practice-based evidence studies, using the best available evidence gained from the scientific method, are used to inform medical decisions.

To accomplish this, cloud computing and data trust infrastructures are used for data collection, aggregation, integration, information management and curation. The resulting federated data structures then are available for evidence generation using smart analytics tools.
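
To make the generation phase more concrete, here is a minimal sketch in Java of the idea described above: de-identified outcome records from several federated sources are aggregated and reduced to a simple piece of comparative-effectiveness evidence (the observed improvement rate per treatment). The record fields, source names and analysis are hypothetical illustrations, not an IBM tool or API.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch: aggregate de-identified outcome records from several
// federated sources and derive a simple piece of "evidence": the observed
// improvement rate per treatment. Names and data are illustrative only.
public class EvidenceGeneration {

    record OutcomeRecord(String treatment, boolean improved) {}

    interface FederatedSource {
        List<OutcomeRecord> fetchRecords();   // e.g., one hospital's curated data set
    }

    // Combine records from all participating sources into one federated view.
    static List<OutcomeRecord> aggregate(List<FederatedSource> sources) {
        return sources.stream()
                .flatMap(s -> s.fetchRecords().stream())
                .collect(Collectors.toList());
    }

    // Comparative-effectiveness summary: improvement rate per treatment.
    static Map<String, Double> improvementRates(List<OutcomeRecord> records) {
        return records.stream().collect(Collectors.groupingBy(
                OutcomeRecord::treatment,
                Collectors.averagingDouble(r -> r.improved() ? 1.0 : 0.0)));
    }

    public static void main(String[] args) {
        FederatedSource hospitalA = () -> List.of(
                new OutcomeRecord("drug-X", true), new OutcomeRecord("drug-Y", false));
        FederatedSource hospitalB = () -> List.of(
                new OutcomeRecord("drug-X", true), new OutcomeRecord("drug-Y", true));

        Map<String, Double> evidence = improvementRates(aggregate(List.of(hospitalA, hospitalB)));
        evidence.forEach((treatment, rate) ->
                System.out.printf("%s: %.0f%% improved%n", treatment, rate * 100));
    }
}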

The ensuing evidence, whether in evidence registries or other forms, can then be easily synthesized, extracted, disseminated and consumed by patients, providers, regulators, researchers and wellness management services.


Service quality
Through business process transformation and the increased use of IT to generate and process data and to produce data-based evidence, care providers and related entities can improve workflow efficiency, increase safety and reduce errors. This will enable new collaborations among the various entities in the healthcare ecosystem, such as providers, insurers, payers, patients, pharmacies and regulators. It also will enable new delivery models, such as patient-centered collaborative care.

Novel incentives
Insurers and services firms may offer financial incentives to healthcare providers and practitioners to adopt evidence-based systems, allowing them to reduce costs while improving patient care.

Evidence-informed, outcome-based payment modeling and contract optimization services will help create and sustain a virtuous cycle in which outcome-based payment incentives lead to improved outcomes and demand for evidence at the point of need. This drives large-scale evidence generation and comparative-effectiveness studies, which in turn enable the creation and continued evolution of an evidence-centric healthcare ecosystem.

Conclusion
As healthcare evolves into an increasingly IT-based industry, evidence is rapidly becoming its currency. It accelerates the transport and availability of data; enables new modeling capabilities, analysis and consumption; and does so while simplifying a complex industry.


The vision of a smarter planet – one that is highly instrumented, interconnected and intelligent – is already in motion. Projects underway in smarter city programs in locations as diverse as the Isle of Malta; Dublin, Ireland; and Dubuque, Iowa, are laying the groundwork.

IBM CEO Sam Palmisano challenged IBMers to create 300 smarter solutions in partnership with clients in 2009. More than 1,200 examples were brought forward across every major industry, in both the developed and developing world.

As a result, hundreds of projects are underway with clients and partners ranging in size from local fire and police departments to country-level governments. Transportation systems, financial systems, and energy and transmission systems are being monitored; environmental systems, including tides and currents, are being measured; and millions of transactions are being captured and analyzed using technology and solutions designed to help them be better understood, managed and made smarter.

As these systems become more pervasive, we will have unprecedented real-time visibility into power grids, traffic and transportation systems, water management, oil & gas supplies, and financial services. And at some point, we will need them to share their information more effectively and ‘talk’ to each other as part of a larger framework.

Modeling the smarter planet
A smarter planet solution is a closed-loop ‘system of systems’. The starting point of such a solution is always the real world itself – whether it is a smarter grid, building, supply chain or water system. The instrumentation integrated into these systems provides the mechanism to capture real-world observations through digital models. Since the real-world systems are interconnected and interdependent, the modeled worlds will be too.

Connecting these systems will require digital representations to help assess the complexity, maneuver through environmental variables and achieve more predictive outcomes.


These models will help assimilate and stitch together the captured data, interpolating or extrapolating into areas where data is not yet available. This supports predictive analysis that converges on the most plausible explanation of the available information. Assistive technologies and increasingly intelligent instrumentation and modeling then can be applied to real-world systems.
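
As an illustration of how a digital model can interpolate and extrapolate where observations are missing, the following Java sketch works over a hypothetical stream of sensor readings (water levels, chosen purely for illustration): gaps between readings are filled by linear interpolation, and the last observed trend is extended as a naive prediction. A real smarter planet model would of course be far richer.

import java.util.TreeMap;

// Minimal sketch of a digital model over sparse sensor observations: linear
// interpolation fills gaps where no reading exists, and a naive extrapolation
// of the last observed trend serves as a placeholder "prediction". The water
// level scenario and numbers are illustrative, not from the GTO.
public class SensorModel {

    // timestamp (minutes) -> observed water level (cm)
    private final TreeMap<Double, Double> observations = new TreeMap<>();

    void observe(double t, double value) { observations.put(t, value); }

    // Estimate the value at time t, interpolating between surrounding readings
    // or extrapolating from the last two readings when t is outside the data.
    double estimate(double t) {
        var lo = observations.floorEntry(t);
        var hi = observations.ceilingEntry(t);
        if (lo == null) return hi.getValue();               // before first reading
        if (hi == null) {                                    // beyond last reading: extrapolate
            var prev = observations.lowerEntry(lo.getKey());
            if (prev == null) return lo.getValue();
            double slope = (lo.getValue() - prev.getValue()) / (lo.getKey() - prev.getKey());
            return lo.getValue() + slope * (t - lo.getKey());
        }
        if (lo.getKey().equals(hi.getKey())) return lo.getValue();
        double w = (t - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + w * (hi.getValue() - lo.getValue());
    }

    public static void main(String[] args) {
        SensorModel model = new SensorModel();
        model.observe(0, 120);
        model.observe(10, 126);
        model.observe(20, 135);
        System.out.println("interpolated level at t=15: " + model.estimate(15));
        System.out.println("predicted level at t=30:    " + model.estimate(30));
    }
}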

As the components for analyzing these systems become more standardized, data from other types of models - business and enterprise systems, physical IT networks, social networking, industry frameworks, behavior models - can then be captured and shared across platforms from varying vantage points.

It is through orchestrating these models that we will be able to capture the digital representations of how the business, human, and physical worlds interact and to predict, manage, and provide continuous progress in a smarter planet.

Conclusion
Hundreds of projects designed to make a variety of systems more instrumented, integrated and intelligent already are underway globally. Systems as varied as utilities, energy and transmission grids, waterways and financial transactions are being measured and monitored to help make them smarter.

A platform for enabling and facilitating synthesis and orchestration across these environments and multi-modal information systems is potentially the most promising anchor point for creating repeatable smarter planet solutions. There also are broad opportunities, as the coverage of interconnected world models expands, to unlock insights and predict business outcomes.


In the near future, three software trends spanning the entire software stack will significantly impact enterprises and their IT support.

First, at the top layer, or business layer, upper middleware will give executives increased visibility and control over how their business operates.

Next, at the platform layer, new control technologies will enable clients to dynamically determine which elements of their IT stay in-house and which elements are “outsourced” to a cloud computing provider.

Finally, the development of a new programming language can help software developers and consumers more easily access parallel computing capabilities through multicore and hybrid architectures.

Upper middleware
Today’s complex application portfolios, including legacy, custom and packaged applications, support important operational functions but limit the ability to innovate, differentiate and compete. The applications’ fixed business processes, rules and information models frequently make it expensive and time-consuming to alter operations to reflect the changing, often dynamic, needs of businesses. As a result, enterprises operate on application logic as opposed to business needs, making it increasingly difficult to shed obsolete applications and innovate to meet those needs.

However, a confluence of emerging technologies — including improved modeling, analytics, rules engines, discovery mechanisms and monitoring tools — is making it possible to return more control of business operations to business executives.

Upper middleware offers a new perspective on business operations, using new approaches to business-level models to describe, in business terms, how a business needs to operate. This enables business users to define and manage end-to-end processes and optimize business outcomes. It also enables them to think holistically about processes, information, business rules and analytics in a business context. Discovery mechanisms then help business users understand how to leverage their current application environment and other IT investments to innovate operationally.
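
The following Java sketch illustrates the underlying idea of business-level rules: a rule is named and expressed in business terms and evaluated against process data, so changing how the business operates does not require changing application code. The rule names, fields and thresholds are assumptions made for illustration; real upper middleware would combine such rules with modeling, analytics, discovery and monitoring.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of business-level rules: each rule is named in business terms
// and evaluated against process data, so a rule change does not mean changing
// application code. Rule names, fields and thresholds are illustrative.
public class BusinessRules {

    record Rule(String name, Predicate<Map<String, Object>> condition, String action) {}

    static List<String> evaluate(List<Rule> rules, Map<String, Object> processContext) {
        return rules.stream()
                .filter(r -> r.condition().test(processContext))
                .map(r -> r.name() + " -> " + r.action())
                .toList();
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule("High-value order review",
                        ctx -> ((Double) ctx.get("orderValue")) > 50_000,
                        "route to senior approver"),
                new Rule("Preferred-customer fast track",
                        ctx -> "gold".equals(ctx.get("customerTier")),
                        "skip credit check"));

        Map<String, Object> order = Map.of("orderValue", 72_000.0, "customerTier", "gold");
        evaluate(rules, order).forEach(System.out::println);
    }
}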

Client controlled cloud
Cloud computing has the potential to become the dominant means for delivery of computing infrastructure and applications on the Internet. As cloud takes hold, the control components of software that are used to create, configure and adjust the nature of applications will be separated out from the cloud data functions and will move to the edge of the enterprise.

A significant result of cloud computing is the rapidity and ease of creating new computing functions and services in the network. However, developing new software applications may be of limited use to established businesses that have a significant amount of existing data sources and applications.

New value is more commonly derived by combining the existing data and applications with new types of processing functions. This combination process creates a network of applications and data services, jointly referred to as computing services. The advent of cloud computing can help enterprises derive business value by keeping some critical IT elements in-house and outsourcing other elements into the cloud.

The challenge, however, is maintaining control of the policies and guidelines that such outsourcing to the cloud entails.

Although the separated control functions can take many different forms, one possible manifestation is that these functions emerge in the form of an on-premises appliance system or software layer. The on-premises system will result in the creation of what we are calling a Client Controlled Cloud (C3). In the C3 paradigm, computing services are created and composed from a variety of cloud providers and existing services in the enterprise to create solutions for the enterprise. However, the control and management of such solutions remains firmly within the administrative domain of the enterprise client.
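
A minimal sketch of the C3 placement decision follows, in Java. The policies, workload attributes and provider categories are assumptions made for illustration; the point is simply that the decision logic, and therefore control, remains inside the enterprise rather than with the cloud provider.

import java.util.List;

// Minimal sketch of the C3 idea: the placement decision (which workloads stay
// in-house and which go to an external cloud provider) is made by a control
// layer that the enterprise administers. Policies and workloads here are
// illustrative placeholders, not a real IBM or cloud-provider API.
public class ClientControlledCloud {

    enum Placement { IN_HOUSE, PUBLIC_CLOUD }

    record Workload(String name, boolean handlesRegulatedData, double peakLoadFactor) {}

    // Enterprise-owned policy: regulated data never leaves the premises;
    // highly elastic workloads are pushed to the cloud to absorb peaks.
    static Placement decide(Workload w) {
        if (w.handlesRegulatedData()) return Placement.IN_HOUSE;
        if (w.peakLoadFactor() > 3.0) return Placement.PUBLIC_CLOUD;
        return Placement.IN_HOUSE;
    }

    public static void main(String[] args) {
        List<Workload> workloads = List.of(
                new Workload("patient-records-db", true, 1.2),
                new Workload("seasonal-web-storefront", false, 8.0),
                new Workload("internal-reporting", false, 1.5));

        for (Workload w : workloads) {
            System.out.printf("%-25s -> %s%n", w.name(), decide(w));
        }
    }
}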

Multicore programming model
Due to power constraints, hardware alone no longer can provide the single-thread performance increases that the industry has seen over the past few decades. Instead, performance increases will come from parallelism, either homogeneous (via multicore processing) or heterogeneous (via hybrid systems). Beyond the applications that demand ever more performance, this shift also affects the productivity of software development. In the past, parallelism was optional and attracted expert programmers, but from now on, all applications that require a certain level of performance will need to exploit parallelism.

Unfortunately, the number of skilled programmers that can thrive with existing parallel programming models is not growing, while new application domains are leading towards an increasing number of non-expert programmers writing software programs.

Although many companies are exploring new programming models for the multicore era, most efforts are C/C++ derivatives, which narrow the set of programmers who can safely exploit parallelism and, in the process, lose the productivity gains that have been realized with Java.

In contrast, IBM is developing X10, an evolution of the Java programming language for concurrency and heterogeneity, leveraging more than five years of research funded by the Defense Advanced Research Projects Agency (DARPA). IBM is investing significantly in developing the ecosystem around X10, including collaborations with some two dozen universities.


X10 allows programmers to expose fine-grained concurrency in their applications, which should be a good match for the multicore era. It also enables programmers to naturally describe heterogeneous computation at the language level without tedious message-passing constructs. X10 has been used to demonstrate a 6x productivity improvement over C with the Message Passing Interface (MPI).
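
The GTO does not include X10 source code, so as a rough analogue the following sketch uses plain Java's fork/join framework to express the kind of fine-grained, recursive task parallelism (here, a parallel array sum) that X10's async/finish constructs are designed to state more directly at the language level.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Rough analogue in plain Java (not X10): fork/join expresses fine-grained,
// recursive parallelism by splitting an array sum into subtasks. X10's
// async/finish constructs are intended to express this kind of concurrency
// more directly.
public class ParallelSum extends RecursiveTask<Long> {

    private static final int THRESHOLD = 10_000;  // below this, sum sequentially
    private final long[] data;
    private final int from, to;

    ParallelSum(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        ParallelSum left = new ParallelSum(data, from, mid);
        ParallelSum right = new ParallelSum(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then combine
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println("sum = " + sum);   // expected: 499999500000
    }
}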

Conclusion
Software will play an increasingly important role in helping to achieve new computing performance standards and maximizing the potential of distributed or workload optimized computing systems.

Upper middleware will provide non-technical users with a unified dashboard view across layers, offer a holistic business view that enables better decision making, and improve the ability to shed legacy applications in favor of more transformative ones. Enhanced discovery tools will help this operational innovation model evolve.

As cloud computing becomes more pervasive and easier to implement, automatic provisioning at the platform layer will simplify decisions about which functions and applications can be outsourced or moved to the edge of the enterprise.

By adopting the X10 programming language, software developers will contribute to the future of software-based computing acceleration across increasingly interoperable, scalable, parallel computing systems.


The average corporation spends around 70 percent of its IT budget to keep existing operations going, with a significant portion allocated to maintaining and improving its legacy equipment.

Many companies spend close to 85 percent of their IT budget on ongoing operations, leaving only 15 percent available for innovation. This level of constraint puts these companies at high risk of completely losing their IT-driven business agility.

Highly efficient companies, on the other hand, spend only 60 percent of their IT budget on ongoing operations, leaving 40 percent for new and innovative initiatives. Such companies have a competitive edge based on continued high business agility and the ability to constantly differentiate.

Legacy is by no means limited to ‘old hardware.’ It actually is found across the entire IT value stack. As soon as a business model, process, software, data format or infrastructure is deployed, it is considered to be legacy. It even includes soft factors like 'know-how' legacy. And as with all value stacks, handling it in an integrated way results in the highest value.

New and emerging technologies and new business realities are changing the game for legacy efficiency improvements across the legacy value stack. These include:


The integrated legacy control loop
To help manage and actively handle these legacy classes in a repeatable pattern, a legacy control loop should be implemented. This control loop would feature three defined positions: identify, improve and operate.

Each position in the loop would feature a variety of actions supported by corresponding tool sets and value prediction tools.

There are two ways to apply the legacy control loop. The legacy control cycle can be applied selectively to one legacy class, even to one single legacy item. But when applied in the form of an integrated legacy service offering across several classes, it provides significantly higher value.

For the ‘Identify’ position of the control loop, a legacy dashboard and workbench can be used to map applications, data and infrastructure to a business process – and vice versa. This would provide valuable insight into the complex relationships between infrastructure, data and applications and the related business processes. It also could establish a legacy inventory to help facilitate and maintain continued optimization.

The ‘Improve’ position of the legacy control loop includes all actions suitable for positively changing the legacy system. While the part of the legacy that is considered a burden is reduced, its heritage is fostered and better leveraged. A comprehensive tool suite would help leverage insights gained from the legacy dashboard.

Last but not least, the ‘Operate’ aspect of the legacy control loop ensures continued operation, and includes actions to reduce future legacy issues. As part of this, a legacy council would, for example, review all IT procurements in order to ensure that new acquisitions are ‘legacy proven’.
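
As a rough illustration of one pass through the identify, improve and operate positions, the following Java sketch runs the loop over a hypothetical legacy inventory. The legacy classes, burden scores and actions are placeholders that, in practice, a legacy dashboard and workbench would populate from discovery tooling and map to business processes.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of applying the identify-improve-operate loop to a legacy
// inventory. Legacy classes, burden scores and actions are illustrative
// placeholders only.
public class LegacyControlLoop {

    record LegacyItem(String name, String legacyClass, int burdenScore) {}

    // Identify: build (or refresh) the legacy inventory.
    static List<LegacyItem> identify() {
        return List.of(
                new LegacyItem("COBOL billing batch", "application", 8),
                new LegacyItem("EDI order format", "data format", 5),
                new LegacyItem("unvirtualized file servers", "infrastructure", 7));
    }

    // Improve: act on the items where the burden outweighs the heritage value.
    static List<String> improve(List<LegacyItem> inventory, int threshold) {
        List<String> actions = new ArrayList<>();
        for (LegacyItem item : inventory) {
            if (item.burdenScore() >= threshold) {
                actions.add("modernize or consolidate: " + item.name());
            }
        }
        return actions;
    }

    // Operate: keep running, and gate new acquisitions so they are 'legacy proven'.
    static void operate(List<String> plannedActions) {
        plannedActions.forEach(a -> System.out.println("scheduled: " + a));
        System.out.println("legacy council: review all new IT procurements");
    }

    public static void main(String[] args) {
        // One pass of the loop; in practice the cycle is repeated continuously.
        operate(improve(identify(), 7));
    }
}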

Conclusion
In order to maintain high business agility and the ability to differentiate, constant and repeated legacy efficiency improvement, with double-digit savings per year in ongoing IT operations, is a must.

To achieve this, two key imperatives have to be followed. First, legacy efficiency improvements have to be applied in an integrated fashion across the legacy value stack layers and not just to one layer at a time. Second, a structured approach needs to be leveraged to guarantee repeated savings, which are achieved by repeatedly applying the legacy control loop with the three basic steps of identify, improve and operate.


The ongoing, exponential increase in wireless traffic globally is driving the need for more efficient use of both radio spectrum and wireless infrastructure capacity.

Spectrum constraints are due to both the limits of current radio-frequency allocation and the effects of radio interference. Wireless infrastructure capacity is constrained by network bandwidth limitations, upgrade costs and logistics, and the legacy of existing installations.

Both sets of constraints are becoming bottlenecks. Addressing them offers significant opportunities for holistic, system-wide optimizations that can create new efficiencies and new business opportunities.

From an application perspective, there are three major areas driving significant growth of wireless traffic:


The existing wireless infrastructure
Current wireless network architecture has three regions: the radio access network (RAN), the backhaul to the radio network controller (via microwave or fiber), and the core network.

Each of these three architectural regions poses challenges:

The RAN is limited by the reusable spectrum for wireless mobile communication. For instance, the data-intensive applications popularized by devices such as the Apple iPhone™ are challenging the cellular wireless network’s ability to effectively support user hot spots. The wireless industry is trying to solve these issues by making wireless cells smaller in hot spots and using more complex signal processing to improve spectral efficiency. The smaller cells will drive the need to manage radically distributed systems, and the signal-processing intensive solutions drive the need for new and more powerful systems and system architectures.

Backhaul capacity is limited primarily due to the proliferation of cells to address spectrum re-use. Current industry approaches to address the growth of aggregate traffic involve upgrades of microwave backhaul or installation of fiber backhaul, both of which require substantial capital investments. It is important to reduce the growth of traffic over backhaul links using approaches such as content caching and traffic shaping at the wireless edge.
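
As a small illustration of content caching at the wireless edge, the following Java sketch keeps a least-recently-used (LRU) cache of popular content near the base station so that only cache misses travel over the backhaul link. The content names, cache size and request pattern are illustrative assumptions.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of content caching at the wireless edge: a small LRU cache
// colocated with the base station serves repeated requests locally, so only
// cache misses travel over the backhaul link. Names and sizes are illustrative.
public class EdgeCache {

    private final Map<String, byte[]> cache;
    private long backhaulRequests = 0;

    EdgeCache(int capacity) {
        // LinkedHashMap in access order gives LRU eviction once capacity is exceeded.
        this.cache = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > capacity;
            }
        };
    }

    byte[] get(String contentId) {
        byte[] hit = cache.get(contentId);
        if (hit != null) return hit;                 // served at the edge, no backhaul
        backhaulRequests++;                          // miss: fetch over the backhaul
        byte[] content = fetchOverBackhaul(contentId);
        cache.put(contentId, content);
        return content;
    }

    private byte[] fetchOverBackhaul(String contentId) {
        return contentId.getBytes();                 // stand-in for the real transfer
    }

    public static void main(String[] args) {
        EdgeCache cache = new EdgeCache(2);
        String[] requests = {"video-A", "video-A", "video-B", "video-A", "video-C", "video-B"};
        for (String r : requests) cache.get(r);
        System.out.println("requests: " + requests.length
                + ", over backhaul: " + cache.backhaulRequests);
    }
}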

Core networks involve multiple tiers, typically with large latency for media applications. A flatter and more intelligent core network is required to reduce latency for media applications and to enable advanced services to overcome performance variability in the wireless network.

New approaches for the future
To address these constraints and challenges, new technologies are emerging for the future wireless infrastructure.


Conclusion
In the future, given the anticipated growth in wireless bandwidth requirements and the backdrop of both spectrum and infrastructure constraints, there will be a huge opportunity for new systems and solutions that enable services at the edge of the wireless network.

A key dimension of this opportunity will be the ability to enable and optimize service quality, mobility and cost using computational IT platforms, tools, practices and approaches. This profound shift represents a new paradigm of wireless and IT infrastructure convergence.


Over the next several years, there will be a significant transformation of IT delivery, with an increasing focus on manageability, lower operating costs, and improved time-to-value in delivering IT capabilities to line-of-business owners and customers.

This will include a selective adoption of “private Clouds” and cloud services and a re-thinking of the traditional layer-by-layer approach of building and managing IT systems, with their multiple layers of applications, middleware, infrastructure software, servers, networks, and storage.

In the traditional approach, integration of the overall “system” happens only in the customer data center, resulting in significant integration cost, elongated deployment time, and legacy management complexity. The resulting environment is difficult to adapt to changes in demand and usage.

IT suppliers, including IBM, will address these problems by introducing Workload Optimized Systems (WOS) that are pre-integrated and optimized for important application domains, such as data analytics, business applications, web/collaboration, compliance and archiving.

These WOS will deliver transformational improvements in client value by integrating and optimizing the complete system at all layers.

For example, a WOS for data analytics could transform user interaction with large data warehouses by reducing the variance in the time to satisfy a user query to the data warehouse.

Workload optimized continuum
The client value provided by WOS includes improvements in functionality, reliability, availability, time to deployment (which is also time-to-value), as well as reductions in operating costs through better manageability and improved energy efficiency.

In current industry practice, we see increasing functionality and client value with improvements in individual components over time. For instance, transistor density has been increasing in line with Moore’s Law (2x every two years). However, the resulting boost in performance increasingly comes in the form of more parallelism (more cores, more threads per core). Delivering application-level increases in performance will require optimization of the software stack to exploit this parallelism; if such changes are not made, progress will not keep pace.

In this GTO topic, a WOS continuum is defined that demonstrates an increase in client value – at a rate that is much higher than current industry practice. The following describes the three levels in the WOS continuum: integration, customized integration and hardware/software co-optimization.

Infrastructure and technology
In addition to looking at some WOS examples, one also must investigate the underlying system infrastructure. In looking at many important classes of workload optimized systems, IBM researchers have discovered common patterns among them. As this new paradigm of systems establishes itself, it is anticipated that only a few platforms – numbering in the single digits – will emerge to support workload optimized systems.

A platform for WOS will comprise common components like:

These systems will require system-wide co-optimization of hardware and software, and will exhibit a tight connection between software, compute elements, extended memory and storage elements, and high-speed network. As new technologies emerge in these areas, WOS can exploit the new co-optimization opportunities across the HW/SW stack.

Conclusion
Increasing client value will be delivered through different levels of integration, moving through the WOS continuum to deliver high levels of co-design and co-optimization across the hardware and software stack. Successive refinement and expansion of customer value through a multi-year roadmap will be necessary to overcome any perception that a WOS is a one-of-a-kind hardware solution for each application, and to demonstrate that many application/workload classes can be supported by a small set of workload optimized platforms.


IBM, the IBM logo and ibm.com are registered trademarks or trademarks of International Business Machines Corporation in the United States and/or other countries. Other company, product and service names may be trademarks or service marks of others. © Copyright IBM Corporation 2009. All rights reserved.

International Business Machines Corporation
New Orchard Road, Armonk, NY 10504