
IT INFRASTRUCTURE ENDGAME

Ravi Sriramulu
Senior Advisor, Sales Engineer Analyst
Dell EMC
[email protected]

Knowledge Sharing Article © 2017 Dell Inc. or its subsidiaries.


Table of Contents

Abstract
Introduction
Evolution
    Electronic Accounting Machines
    Mainframe and Minicomputers
    Personal Computers
    Client-Server
    Enterprise Internet Computing
    Third Platform
It’s all about Time
Challenge
Defining the Infrastructure
Conclusion

Disclaimer: The views, processes or methodologies published in this article are those of the

author. They do not necessarily reflect Dell EMC’s views, processes or methodologies.


Abstract

Everything evolves at its own pace, and there are many laws that describe how and why things evolve. IT has come a long way: from Electronic Accounting Machines and Mainframes, to the second platform of Client-Server and personal computing, to the third platform of Mobile/Cloud Computing, Big Data Analytics and Social Media, extending to the Internet of Things. Many organizations still use second-platform computing and even Mainframes, while some are already deliberating about a fourth platform. Whether or not there will be fourth and fifth platforms, one thing is common to all of them: consuming data in some form, performing operations on that data, and generating new data.

At the core of all this, from Mainframes to third-platform computing, are compute, storage, networking and software. How we combine and use them, in varying degrees, defines the various platforms we know of. The speeds at which the underlying hardware operates certainly keep evolving; however, the various computing platforms are not defined by how fast a certain processor, drive or cable is, but by how we combine them to solve real-world problems.

Organizations are struggling to adapt to market trends; nobody wants to be the next Kodak, Nokia or BlackBerry. There is a need for an infrastructure that puts an end to the chase after ever-changing trends, fits every computing platform that exists today, and also adapts to the future. The key to such an Endgame is an open infrastructure that can take various forms and shapes at the click of a button. Let’s discuss the characteristics of such an Infrastructure Endgame Platform and how it can be achieved using the various components we have today, within Dell EMC and the rest of the world.


Introduction

People consume just about anything – food, information, entertainment, money, etc. Every country in the world is built on a consumption-based economy. Data defines the trend of consumption, and organizations are trying to feed the market by working with the data and transforming it into a shape that is appealing and easily consumable by their customers. One can divide the whole procedure into the following three steps:

1. Input – Collect and store the data

2. Process – Operate on it

3. Output – Present the results

All digital applications fit this model. For instance, e-commerce websites collect information about the items to be sold from the sellers, store that information, and then process it so that it can be displayed in the catalog, which is the output. The same steps can happen on the fly in an on-demand taxi service app: when the user opens the app and requests a cab, the app sends a request to its servers, which store the locations of nearby cabs, process this data, add pricing, and transfer the result back to the app in the form of small cars on the map. Every application performs these three tasks in its own way using the intelligence built into the software.
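To make the three steps concrete, below is a minimal Python sketch of the same Input–Process–Output flow for a hypothetical taxi-app request. The data, function names and pricing formula are assumptions made purely for illustration, not any real service’s logic.

# A minimal sketch of the Input -> Process -> Output model described above,
# using a simplified, hypothetical taxi-app request. All names and numbers
# (cab IDs, base_fare, rate_per_km) are illustrative assumptions.

def collect_input():
    # Input: collect and store the data (here, hard-coded sample cab locations)
    return [
        {"cab_id": "A1", "distance_km": 1.2},
        {"cab_id": "B7", "distance_km": 0.4},
        {"cab_id": "C3", "distance_km": 2.8},
    ]

def process(nearby_cabs, base_fare=3.0, rate_per_km=1.5):
    # Process: operate on the data (add a simple price estimate, sort by distance)
    priced = [
        {**cab, "estimated_fare": round(base_fare + rate_per_km * cab["distance_km"], 2)}
        for cab in nearby_cabs
    ]
    return sorted(priced, key=lambda cab: cab["distance_km"])

def output(results):
    # Output: present the results (a real app would draw cars on a map instead)
    for cab in results:
        print(f"Cab {cab['cab_id']}: {cab['distance_km']} km away, ~${cab['estimated_fare']}")

if __name__ == "__main__":
    output(process(collect_input()))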

Any working software requires three physical components:

1. Compute – where data is processed

2. Storage – where data is stored

3. Network – through which data is transferred

Evolution

Electronic Accounting Machines

Also known as tabulating machines or punched-card machines, EAMs helped summarize, sort and account for information. A human machine operator acted as the Operating System, controlling the entire system and its resources. The first industrial application of such machines was processing data for the 1890 US Census. The machine was developed by Herman Hollerith, and the company he founded to build machines like it would later become IBM. Punched-card machines were widely used by the US and its allies during World War II to store personal records and payroll information. During wartime, IBM introduced machines using electronic vacuum-tube circuits in specialized war equipment, and later built commercial products that employed vacuum-tube technology.

Mainframe and Minicomputers

IBM developed the first mainframe computer using thousands of vacuum tubes; it solved addition and multiplication problems in 6 seconds. As demand grew for big computers that could do more work in less time, transistor-based computers started replacing vacuum-tube machines. Early mainframes had no interactive user interface; input was through punched cards, later replaced by magnetic tape, and typewriters became the consoles in the 1970s. They operated in batch mode to support back-office functions, such as customer billing and accounting for banks and insurance companies processing millions of records each day.


Mainframes were hugely successful because of their robustness, stability and reliability; they could run uninterrupted for decades. They are modular in nature, with upgrades requiring only a portion of the OS to be reset, and were way ahead of their time, supporting virtualization, load balancing, and more.

Mainframes came at a very high cost to their buyers. During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace lower-end mainframes. These computers, sometimes called departmental computers, sold for much less than their bigger cousins. But soon minicomputers started to decline in the face of the versatility offered by personal computers. Mainframes also declined considerably, and speculation grew that the last mainframe would be unplugged on December 31, 1999. However, many companies still run their most important core businesses on mainframes. The major problem that mainframes pose today is their ageing pool of programmers.

Personal Computers

Mainframes and especially minicomputers were being designed to appeal to home users as early as the 1970s and 1980s, and several companies made attempts in this direction. However, PCs gained popularity as mass-market consumer electronic devices only in 1981, with the development of microprocessors and IBM’s introduction of the IBM Personal Computer, which also helped coin the term. Users and small businesses could use these computers in their homes and offices and do their work without having to reach out to mainframe operators, where jobs were queued and shared among many individuals.

As the PC gained wide adoption, companies worked to decrease the cost of the components. Early PCs offered the option of connecting to television sets as terminals. Apple and Microsoft played an important role in making PCs more interactive. At the same time there were several advances in storage media, from audio cassettes to proprietary magnetic tapes to Hard Disk Drives, along with the development of the initial versions of SCSI to transfer block data between the drives and microprocessors. These PCs were standalone systems until advancements in Operating Systems allowed them to connect to a network.

Client-Server

Advancements in networking led to the rise of a new model of computing. In this model, desktops and laptops – called clients – are networked to computers called servers. Clients communicate with servers and request services over the network. Servers are usually larger counterparts of client computers, with more processing power and multitasking capabilities, and offer a wide range of services. This allowed companies to offload processing to a few expensive servers and let users request services through many inexpensive computers. As the need for computing grew, however, large organizations found it difficult to scale the Client-Server model. Combining existing local area networks (LANs) into a single coherent network was challenging, both physically and logically.


Enterprise Internet Computing

The early 1990s saw the adoption of TCP/IP, enabling organizations to merge existing networks across different geographical locations. Software and operating systems were developed that enabled communication between computers from different hardware vendors, previously confined to small, isolated networks.

As the number of connected computers grew, more information started flowing, creating a need to store more data and transfer it at higher speeds. During this era there were advancements by many companies, and standards were developed by associations such as IEEE, SNIA, IETF and ANSI. Finally, the enterprise infrastructure market found an efficient, economical and stable model in RAID arrays and the TCP/IP and Fibre Channel protocols, which were vendor-agnostic and used a common SCSI layer underneath to transport block data. Organizations all over the world employ this model in some form or another. It has recently come to be known as the ‘Second Platform’, with the introduction of the next phase in IT infrastructure called the ‘Third Platform’.

Third Platform

Organizations around the world have developed services for end customers like us. However, many of the companies and teams that catered to end customers by acting as middlemen are now disappearing. There has been a shift in the audience recently: the key consumers are those accessing social media data, stored in the cloud, through mobile devices, where information is fed to them based on analytics tailored to each specific user. At the same time there is a significant rise in a new breed of devices being connected to the internet – security devices, cameras, light bulbs, digital personal assistants and driverless cars, to name a few. Lacking a better name, the industry uses the term ‘Internet of Things’ (IoT) to refer to these devices.

Artificial Intelligence (AI) is being pushed into third-platform components at a rate never seen before, and the basis of this intelligence is data and analytics. Every news feed a user receives has some form of analytics in the background. Similarly, IoT devices rely on analytics to function and serve users. To achieve this, huge amounts of data are collected per user and analyzed, and the results are used to serve the consumer better every day. Typical enterprise IT infrastructure is finding it difficult to cope with such large, dynamic workloads, and hence organizations are moving from traditional storage area networks (SANs) to large server farms or hyperconverged systems, which can scale dynamically with ease.

It’s all about Time

If you take a step back and observe what has been going on in the world of IT infrastructure, or in the world in general, you will see a drive toward efficiency that is characteristic of our species. From accounting machines replacing human accountants, to cloud-based solutions, server farms and hyperconverged systems replacing back-office client-server setups and enterprise SANs, the common element is the continuous urge to do more, accurately and efficiently, in less time. The urge is relentless, as if there is a race against time. But this has not been just a race; it seems like a never-ending marathon.


Every time there is a paradigm shift in IT infrastructure evolution, it delivers a heavy blow to the whole industry. Existing infrastructure is often rendered useless, incurring huge losses on the investment. Organizations are often blindsided, not knowing what to do, which frequently leaves them to opt for a merger or a split, for better or worse. It has become impossible to predict the future in the face of ever-changing technology, and the best remedy for its dynamic nature is to adapt and blend in. Enterprises are not really interested in investing in infrastructure that isn’t future-proof. But is it possible to build an infrastructure that is truly future-proof, an IT Infrastructure Endgame of sorts? Maybe not, but there is no harm in trying.

Challenge

New software development methodologies such as Agile, Lean and DevOps strive to release software updates every day, a practice known as CI/CD – Continuous Integration and Continuous Deployment. CI/CD is made possible by tools built with heavy automation layers. These tools can turn an operation like a code check-in into automated integration, an automated build, automated tests in a test environment, and deployment of the software to production. However futuristic this may seem, the infrastructure on which these operations run is often quite basic and very slow to respond to changes, and is frequently seen as a bottleneck in the software delivery pipeline.
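As a purely illustrative sketch, not any specific vendor’s tool, the Python outline below shows the kind of automation chain such tools provide: a check-in event flows through integrate, build, test and deploy stages, and a failure at any stage stops the pipeline. The stage names and functions are assumptions made for the example.

# Illustrative sketch of a CI/CD-style automation chain triggered by a code check-in.
# The stages and their bodies are hypothetical placeholders, not a real tool's API.

from typing import Callable, List

def integrate(commit: str) -> None:
    print(f"Merging commit {commit} into the integration branch")

def build(commit: str) -> None:
    print(f"Building artifacts for {commit}")

def test(commit: str) -> None:
    print(f"Running automated tests for {commit} in a test environment")

def deploy(commit: str) -> None:
    print(f"Deploying {commit} to production")

PIPELINE: List[Callable[[str], None]] = [integrate, build, test, deploy]

def on_checkin(commit: str) -> None:
    # Each stage runs automatically; a failure stops the pipeline.
    for stage in PIPELINE:
        try:
            stage(commit)
        except Exception as exc:
            print(f"Pipeline stopped at {stage.__name__}: {exc}")
            return
    print(f"Commit {commit} released")

if __name__ == "__main__":
    on_checkin("abc123")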

To create a future-proof infrastructure we have to revisit the core components of computing: compute, storage and network. As seen in the evolution of IT infrastructure, Mainframes coupled these components tightly, traditional SANs took an open approach, and hyperconverged systems have now brought us back to tightly coupled designs. Keeping compute, storage and network open offers great flexibility, but it adds complexity and makes the system vulnerable and cumbersome to manage. Hyperconverged systems, by contrast, are very tightly coupled and easy to manage, but offer very little flexibility.


Defining the Infrastructure

To keep resources and cost balanced, infrastructure should be open and yet act as a pool of tightly connected resources. Let’s throw in one more piece of IT jargon and call it Hyper-Connected Infrastructure.

Fig 1. Components of Hyper-Connected Infrastructure

[Figure 1 shows a layered stack: applications running on VMs, containers and virtual desktops; software-defined elastic virtual pools of compute, memory, network and storage; underlying physical commodity hardware (compute, compute with storage, network, storage) and cloud resources; out-of-band hardware monitoring, monitoring with predictive analytics, and application performance monitoring; orchestration with configuration management; infrastructure as code and configuration as code; version-controlled infrastructure; and a Continuous Integration/Continuous Delivery system spanning code, build, integrate, test, deploy and release.]


As seen in Figure 1, the next-generation infrastructure has all its components tied together by software, allowing it to act as one holistic system that works like a plug-and-play platform for a variety of applications. The most important characteristics of such a Hyper-Connected Infrastructure are listed below.

• Independently scalable – highly scalable in compute, storage and network, each independently of the others.

• Elastic virtual resource pools – the infrastructure should run on software that converts physical resources into easily consumable, elastic virtual resource pools. Newly added hardware should be auto-discovered and should populate the virtual pools.

• Hybrid hypervisor – supporting both VMs and containers.

• Infrastructure as code – the entire infrastructure and its configuration should be easily representable in a text file (a minimal sketch follows this list).

• Orchestration software – uses infrastructure as code to spin up environments for specific workload requirements.

• Version controlled – infrastructure configuration should be checked in to a version control system for easy repurposing, rollback and validation.

• Out-of-band hardware monitoring – also known as lights-out management; runs independently of the software/OS/hypervisor running on the hardware.

• Software-defined storage – aggregates storage from different systems into a single pool, with automated protection, tiering and load balancing.

• Software-defined networking – spinning up a workload should automatically configure networking, firewalls, intrusion detection, application layer gateways, mirroring, load balancing, content distribution network registration, certificates and so forth.

• Intelligent monitoring – predictive analytics that determine when resources will need expanding and when they are likely to fail.

• Cloud integration – integrates with several public clouds and moves workloads seamlessly between on-premise physical resources and cloud resources.

• Autobursting – creates resources in the cloud automatically, based on monitoring and usage data.

• App Store – an integrated marketplace that supports both enterprise and user-created software, templates, integration modules and plugins.

• Secure identity service – works with the on-premise private infrastructure as well as cloud services.
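The following is a minimal, hypothetical sketch of the infrastructure-as-code idea referenced above: an environment described entirely as data that can live in a text file under version control and be consumed by orchestration software. The schema, field names and values are assumptions for illustration, not any specific product’s format.

# Minimal, hypothetical illustration of infrastructure as code: an environment
# described as plain data that can be stored as text, version controlled, and
# handed to orchestration software. The schema is invented for this example.

import json

web_environment = {
    "name": "web-frontend",
    "compute": {"vcpus": 8, "memory_gb": 32, "instances": 3},
    "storage": {"pool": "ssd-tier", "size_gb": 500},
    "network": {"subnet": "10.0.1.0/24", "load_balancer": True, "firewall": ["80/tcp", "443/tcp"]},
    "runtime": {"type": "container", "image": "example/web:1.4.2"},
}

def render(environment: dict) -> str:
    # Serialize the definition so it can be checked in to version control as text.
    return json.dumps(environment, indent=2, sort_keys=True)

def spin_up(environment: dict) -> None:
    # A real orchestrator would request resources from the virtual pools here;
    # this placeholder only prints what it would ask for.
    c = environment["compute"]
    print(f'Requesting {c["instances"]} x {c["vcpus"]} vCPUs / {c["memory_gb"]} GB '
          f'for environment "{environment["name"]}"')

if __name__ == "__main__":
    print(render(web_environment))
    spin_up(web_environment)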

At the core of this infrastructure is the elastic virtual resource pool, which may be implemented as a service in the hybrid hypervisor or run independently in the form of lights-out management. It aggregates all the physical resources into pools of virtual resources, which the hypervisor can then use to create virtual machines or containers. Newly added hardware automatically reports its resources and joins the virtual pools.
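A minimal in-memory sketch of this pooling behavior is shown below; the classes, fields and capacities are invented purely for illustration.

# Minimal in-memory sketch of an elastic virtual resource pool: physical hosts
# register their capacity, and VMs or containers draw resources from the pool.
# Classes, fields and numbers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class PhysicalHost:
    name: str
    vcpus: int
    memory_gb: int

class VirtualResourcePool:
    def __init__(self) -> None:
        self.free_vcpus = 0
        self.free_memory_gb = 0

    def register_host(self, host: PhysicalHost) -> None:
        # New hardware "joins" the pool by reporting its resources.
        self.free_vcpus += host.vcpus
        self.free_memory_gb += host.memory_gb
        print(f"{host.name} joined the pool: +{host.vcpus} vCPUs, +{host.memory_gb} GB")

    def allocate(self, vcpus: int, memory_gb: int) -> bool:
        # The hypervisor would call this when creating a VM or container.
        if vcpus <= self.free_vcpus and memory_gb <= self.free_memory_gb:
            self.free_vcpus -= vcpus
            self.free_memory_gb -= memory_gb
            return True
        return False

if __name__ == "__main__":
    pool = VirtualResourcePool()
    pool.register_host(PhysicalHost("node-01", vcpus=32, memory_gb=256))
    pool.register_host(PhysicalHost("node-02", vcpus=32, memory_gb=256))
    print("Allocated:", pool.allocate(vcpus=8, memory_gb=32))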


In an environment like this, where everything is connected, triggers can be configured so that they have a desired, quantified effect. For example, if the application monitoring system, shown at the top of Figure 1, sees a bottleneck in the compute resources of a deployed application, it can talk to the orchestration software, which is built on the configuration management system. New compute resources from the virtual resource pool can then be selected and added to the application environment based on the configuration information held in the configuration management system.
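A hedged sketch of such a trigger is given below, continuing the hypothetical resource pool from the earlier example; the threshold, metric and scaling step are assumptions, not recommendations.

# Illustrative trigger: application monitoring detects a compute bottleneck and
# asks the orchestration layer to grow the environment from the virtual pool.
# The threshold, metric source and scaling step are assumptions for the example.

CPU_THRESHOLD = 0.85                         # scale out above 85% average CPU utilization
SCALE_STEP = {"vcpus": 4, "memory_gb": 16}   # resources requested per scaling action

def check_and_scale(app_name: str, cpu_utilization: float, pool) -> None:
    if cpu_utilization <= CPU_THRESHOLD:
        return
    # Orchestration consults the environment's configuration and the virtual pool.
    if pool.allocate(**SCALE_STEP):
        print(f"{app_name}: added {SCALE_STEP['vcpus']} vCPUs and "
              f"{SCALE_STEP['memory_gb']} GB from the virtual pool")
    else:
        print(f"{app_name}: pool exhausted, consider autobursting to the cloud")

# Example usage with the VirtualResourcePool sketched earlier:
# check_and_scale("web-frontend", cpu_utilization=0.92, pool=pool)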

Similarly, since every resource is configured from infrastructure as code that is itself version controlled, any misconfiguration or error can easily be rolled back to the last known good version. There can be hundreds of versions of different environments on the same physical infrastructure, and the deployment of each environment is just an instruction away.
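As a sketch of what such a rollback could look like, assuming the environment definitions from the earlier infrastructure-as-code example are stored as JSON files in a Git repository, the snippet below restores a known-good revision and redeploys it; the tag and file names are hypothetical.

# Hypothetical rollback: restore a known-good, version-controlled environment
# definition and redeploy it. Assumes definitions live as JSON files in a Git
# repository; the tag and file names are made up for the example.

import json
import subprocess

def rollback_and_redeploy(definition_file: str, good_tag: str) -> dict:
    # Restore the file exactly as it existed at the known-good tag.
    subprocess.run(["git", "checkout", good_tag, "--", definition_file], check=True)
    with open(definition_file) as f:
        environment = json.load(f)
    # In a real system the orchestrator would now re-apply this definition.
    print(f'Redeploying "{environment["name"]}" from tag {good_tag}')
    return environment

# Example (hypothetical repository contents):
# rollback_and_redeploy("environments/web-frontend.json", "release-2017-03-01")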

Conclusion

How can this be considered an endgame to infrastructure evolution? Because this kind of setup directly corresponds to the continuous development seen in today’s software development lifecycle. Organizations want to deploy new versions of software every day with the latest and greatest features; failing to do so means losing out to the competition. And this kind of hyper-connected infrastructure, defined entirely by software, built on low-cost commodity hardware, and treating the infrastructure itself as an extension of the software, is all that is needed.

There are numerous types of software products; however, every kind of software product needs

just three components – compute, storage and networking. The world has tried disparate

infrastructure models to provide these three basic components through different approaches.

However, the crux of this kind of infrastructure is software itself, and that is why this kind of

Hyper-Connected Infrastructure is the Endgame of IT Infrastructure.


Dell EMC believes the information in this publication is accurate as of its publication date. The

information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO

REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE

INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED

WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying and distribution of any Dell EMC software described in this publication requires an

applicable software license.

Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.