
Enabling Cloud Adoption Addressing the challenges of multi-cloud

Introduction

Organizations of all sizes are adopting cloud for application workloads. These organizations are looking to avoid the costs of running and managing their data centers or, more often, to accelerate the application delivery process. Using cloud enables development teams to operate with a much greater degree of independence from the underlying operational constraints of infrastructure.

While early cloud adoption was largely about building new applications on Amazon Web Services (AWS), today it is clear that most enterprises will rely on multiple cloud providers in addition to their private infrastructure.  The investments made by Microsoft Azure, Google Cloud Platform, Oracle Cloud, IBM Cloud, and Alibaba Cloud provide compelling infrastructure platforms, each with unique value propositions for new and existing workloads.

For most organizations, this means navigating the transition from a relatively static pool of homogeneous infrastructure in dedicated data centers to a distributed “fleet” of servers spanning one or more cloud providers.



The Challenge

The primary challenge of cloud adoption is heterogeneity: how can operations, security, and development teams apply a consistent approach to provision, secure, connect, and run this infrastructure efficiently?

Diagram: the transition from a traditional datacenter to a hybrid datacenter spanning private cloud, AWS, Azure, and GCP


Essential Elements of Infrastructure

We believe a practical place to begin is by deconstructing the layers of infrastructure software that most organizations use today and then mapping those to a multi-cloud world.

We can simplify the traditional infrastructure software stack into three essential layers:

• Core infrastructure includes the operating system and management software that interacts with the physical storage, compute, and networking, and provides the core compute capacity for your applications. Typically, operations or system administrators provision and manage this layer.

• Application platform includes databases, web servers, message queues, and other components required to run an application or service. This software is the essential application runtime layer with which developers interact.

• Security layer typically focuses on boundary security, using the network to establish a perimeter firewall that protects the infrastructure and application platforms. Usually, a dedicated security team implements and manages this layer.

These are the layers of infrastructure necessary to run any application, each aligned to a single role: operations for core infrastructure, development for the application platform, and security for the security layer.

Unique Challenges of Cloud

The adoption of cloud exposes unique challenges to each of these three layers.

1. Core infrastructure scale and heterogeneity

In the traditional data center, a constrained number of servers are available to the operations team. Thanks to the power of virtualization, this pool is also largely homogeneous: operators provision compute capacity across it, and applications are deployed on top of that capacity.


By contrast, the scale of cloud infrastructure is essentially infinite: the available server fleet has no practical limit, since the cloud providers operate at enormous scale. It is for this reason that cloud providers expose access to their services as APIs, allowing “infrastructure as code” approaches that enable users to codify topologies and express them in a repeatable manner.

And each cloud provider offers a unique inventory of available infrastructure services: a VM on AWS, for example, is subtly different from a VM on Azure or GCP. This heterogeneity is what gives each cloud its richness, but it also introduces tremendous complexity for operations professionals, who must learn the idiosyncrasies of provisioning each infrastructure type.

The challenge for operations teams moving to cloud is to enable automation through infrastructure as code while embracing the inevitable heterogeneity of different cloud providers.

2. Application platform diversity

Development teams always value the ability to use the most appropriate technology for the needs of a particular application—and for that reason every large organization inevitably supports an array of languages and runtime technologies.

Today there is an explosion of choice, as the traditional options of Java and .NET have expanded to include technologies that are closer to cloud native.

In recent years, containers have grown in popularity for new applications because they are particularly well suited to the highly dynamic web applications that are common today.


Cloud service providers today offer many of these runtime technologies natively, but they have also increased the heterogeneity even further by offering their own platform technologies—Lambda functions on AWS, Cloud Functions on GCP, Blob Storage on Azure, for example—which have no direct equivalent among on-premises runtime technologies.

The challenge for architects then is to accommodate the necessary diversity of application platforms across a distributed fleet.

3. Security teams lack an effective network perimeter

The traditional data center has ‘four walls and a pipe’ and a clear network perimeter. Anyone inside the network is assumed to be authorized to access the infrastructure. Firewalls serve as bulkheads between front-end, user-facing applications and back-end databases. IP addresses are generally static, which allows security professionals to place additional constraints on application interactions based on IP address.

However, a cloud doesn’t have a distinct perimeter, and with multi-cloud, that surface area expands exponentially.  And because the network topology is software-defined, any server can become Internet-facing with a few API calls.

This lack of control over network topologies makes it hard to force all traffic through security or compliance tools.  Infrastructure may also span multiple sites, meaning there isn’t a single ingress point to allow secured traffic to flow into a network.  And the decomposition of monolithic applications into highly ephemeral microservices means that IP addresses are highly dynamic, rendering IP-based security inappropriate for many scenarios.

The challenge for security professionals is to rethink the ‘castle & moat’ approach to perimeter-based security and reconsider security holistically across the dynamic, distributed fleet.

A Consistent Approach to Provision, Secure, Connect, and Run Any Infrastructure for Any Application

At HashiCorp, we believe organizations can address these challenges of cloud adoption with tools that each provide a consistent workflow for a single, well-scoped concern at each layer of the infrastructure stack. This focus on workflows over technologies allows the underlying technologies to change while the workflow for each part of the organization does not. As a result, organizations reduce the complexity that technology diversity would otherwise introduce.

More specifically, we believe that successful cloud adoption begins with a separation of concerns: identify the individual challenges for operations, security, and development teams at the corresponding infrastructure, security, and application platform layers, and then define an appropriate technological and organizational blueprint.

Provision Cloud Infrastructure

The specific types of core infrastructure available on each provider vary but are conceptually similar: they provide the underlying compute capacity that applications require, and they must be provisioned before everything else.

Access to these infrastructure resources is made available programmatically and exposed through a native tool: for example, CloudFormation on AWS, Azure Resource Manager on Azure, or Google Cloud Deployment Manager on GCP.


Cloud Infrastructure Defined

For enterprises, the challenge is embracing the unique capabilities of each cloud platform without having to become an expert in the nuances of each platform-specific provisioning tool.  IT operations teams need to provide some constraints while maintaining the benefits of self-service infrastructure that makes cloud so compelling.  

Therefore, the primary concerns for provisioning are:

1. Representing infrastructure as code: codifying infrastructure provides a way to provision it at scale and to produce infrastructure templates that other teams can reuse.

2. Embracing heterogeneity: providing operators with a consistent workflow to provision infrastructure regardless of cloud provider—and without losing access to the full capabilities of each cloud. This eliminates the need to learn cloud-specific provisioning tools (see the sketch following this list).

3. Managing dependencies: infrastructure blueprints necessarily include dependencies on elements that are not available natively as cloud services, such as CDN or monitoring tools that must be incorporated into every image. The ability to incorporate these dependencies into the provisioning process is a prerequisite for most provisioning approaches.
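
To make the first two concerns concrete, the sketch below (written in HCL, the configuration language used by HashiCorp Terraform, which is introduced later in this paper) shows how a single infrastructure-as-code configuration might describe one virtual machine on AWS and one on GCP. The region names, machine types, image identifiers, and project name are illustrative placeholders rather than recommendations.

provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "example-project"   # hypothetical project ID
  region  = "us-central1"
}

# A small virtual machine on AWS...
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder image ID
  instance_type = "t2.micro"
}

# ...and a comparable virtual machine on GCP, described in the same
# configuration and provisioned with the same workflow.
resource "google_compute_instance" "web" {
  name         = "web-1"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}

Both resources are planned and applied through the same commands and the same review process, even though the underlying provider APIs differ.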

Organizations that automate the provisioning of infrastructure across any platform are best able to navigate the transition to multi-cloud.

Core infrastructure provides the compute, storage, and networking resources


Run Applications in Cloud

There will always be diversity in the application platform layer, as different development teams embrace different tools and architectures depending on the application type. Some teams will simply bring their own combination of middleware—app servers, databases, messaging technologies—packaged in VMs, in containers, or even deployed directly onto physical servers. Others will incorporate cloud-native services uniquely available on a particular cloud: on AWS, this might include Lambda functions and managed data stores such as RDS.

The architectural goal when considering the realities of multi-cloud is to enable this diversity across one or more infrastructure providers.  Therefore, two of the primary requirements are:

1. Separation of concerns: how to separate developers from detailed knowledge of the underlying infrastructure so that they can focus on building the application without needing to consider where that application might physically run.

2. Efficient resource utilization: how to schedule resource consumption across a heterogeneous fleet of servers and application types (containers, virtual machines, JAR files, etc.) to use all servers efficiently.

Organizations addressing these two concerns can then consider the fleet of servers as an available pool of resources that is essentially a single large data center.

A consistent approach to accommodate heterogeneity across the application layer


Secure Cloud Infrastructure

The most obvious need in the transition to multi-cloud is a consideration of the security implications of its hybrid, distributed, and dynamic nature. The use of containers, which are highly ephemeral, accentuates this and poses a unique security challenge. Rather than attempting to recreate the traditional ‘castle & moat’ approach, security professionals typically focus on addressing the following core requirements:

1. Distributed secrets management: application-specific secrets such as database usernames and passwords can become exposed given the lack of a network perimeter. Providing a mechanism for operations and development teams to manage and rotate distributed secrets is both a larger problem in this environment and a paramount one.

2. Encryption of data in flight and at rest: traffic between application components that may reside with different providers, or even in different geographies, must be encrypted.

3. Identity management: authenticating application components to one another, for example with expiring tokens that provide assurance of identity.

Addressing these challenges is fundamental for security professionals looking to be enablers of cloud adoption.

A consistent approach to security across distributed infrastructure


Connect Any Application Across Any Infrastructure

The dynamic nature of the cloud means that knowing where infrastructure and application components reside at any given time is challenging. Previously, the static nature of infrastructure allowed users to discover and interact with it through hard-coded addresses or internal load balancers. With the dynamic, API-driven nature of the cloud, those techniques no longer suffice.

Instead, a core requirement of the cloud model is a common backbone in the form of a dynamic registry that describes where services and infrastructure components are running and allows hardware failures to be masked and mitigated. It also enables elasticity without hard-coded addresses or dedicating a large portion of the infrastructure to load balancing. This common backbone needs to:

1. Dynamically discover services: Developers need to register their application services on the network and discover the services their applications depend on.

2. Describe real-time configuration: Operators need the ability to discover and update infrastructure components—for example, updating the settings on every load balancer on the network—to keep the infrastructure healthy.

A common backbone that connects cloud infrastructure provides the linkages so developers can run applications and operators can find the status of the infrastructure at any time.  

Connecting distributed infrastructure with complex network topologies


HashiCorp enables organizations to provision, secure, connect, and run any infrastructure for any application

Diagram: Cloud Infrastructure Delivered (Provision, Secure, Connect, Run)

At HashiCorp, we provide a suite of products that form the blueprint for organizations to adopt any cloud.

Each tool addresses a focused concern within the technical and organizational challenges of infrastructure automation, which means the tools can be adopted one at a time or all together. Each is built around consistent workflows, not specific technologies, so customers can follow the same approach across their private data centers and their cloud environments.


Terraform

Terraform is a tool to provision any infrastructure using a consistent workflow, through the application of infrastructure as code. Terraform's extensible architecture has two parts: Terraform Core and a series of providers—plug-ins that add support for infrastructure platforms such as AWS, GCP, Azure, and vSphere. In this way, a user can adopt a common provisioning workflow and apply it to any infrastructure type.

In addition to supporting provisioning on the major cloud providers, Terraform supports more than 70 infrastructure types (each with its own Terraform provider) and 1,000 unique resource types. The open source nature of the providers makes it easy for anyone to contribute to and improve a provider as infrastructure vendors add new capabilities.

Operators codify infrastructure in the form of Terraform templates, which typically combine infrastructure types (for example, Fastly configured in AWS).  By applying the infrastructure as code concept, operators can collaborate and share these templates in GitHub (or other version control systems) and follow the same principles that software developers use to collaborate on code.


LEARN MORE ABOUT TERRAFORM


Terraform allows a small number of operators to produce approved templates that can be consumed by developers. This producer/consumer relationship is a key ingredient in addressing the organizational challenge of cloud adoption because it reduces the friction and bottlenecks of infrastructure provisioning.
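
As a minimal sketch of this producer/consumer pattern, the configuration below shows how a development team might consume an operations-approved Terraform module instead of writing raw resource definitions. The module source, version tag, and input variables are hypothetical.

# Consume a module published and approved by the operations team.
# The Git URL, version, and variable names are illustrative only.
module "web_service" {
  source = "git::https://github.com/example-org/terraform-aws-web-service.git?ref=v1.2.0"

  instance_count = 3
  instance_type  = "t3.small"
  environment    = "staging"
}

The operations team controls what the module provisions; developers supply only the inputs the module chooses to expose.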

Provisioning requires infrastructure-specific images, as there is no common packaging format for virtual machines across providers. HashiCorp Packer enables operators to build many machine image types from a single source, and a Terraform configuration can then reference the cloud-specific images Packer creates when provisioning infrastructure.
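
One common way to connect the two tools is to have Terraform look up the most recent image produced by a Packer build and then launch instances from it. The sketch below assumes a hypothetical Packer build that names its AWS images with a "web-app-" prefix.

# Find the newest AMI built by Packer in this account...
data "aws_ami" "web_app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["web-app-*"]   # hypothetical image naming convention
  }
}

# ...and provision an instance from that image.
resource "aws_instance" "web_app" {
  ami           = data.aws_ami.web_app.id
  instance_type = "t2.micro"
}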


Vault solves the challenge of security for distributed application infrastructure. It provides multiple layers of security that are independent of the network.

Vault provides secrets management, encryption as a service, and privileged access management. Security operators use Vault to manage secrets centrally—e.g., private encryption keys, API tokens, and database credentials. Vault will store and manage the distribution of those secrets to applications and end users.

Security teams use a common Vault interface to manage secrets. Management tasks include password changes, credential rotation, and policy updates.

Vault encrypts data at rest and in transit. Its extensible architecture provides support for many types of storage and authentication systems. Its policy support provides granular access control between human and server or between server and server.
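
Vault policies are themselves written as code, in HCL. The minimal sketch below grants an application read-only access to its own secrets; the path layout is an assumed naming convention, not a Vault default.

# Allow the web application to read its own secrets and nothing else.
# The "secret/web-app/*" path layout is a hypothetical convention.
path "secret/web-app/*" {
  capabilities = ["read"]
}
# Vault denies anything not explicitly granted, so no other paths
# are reachable under this policy.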

Vault is highly available within each data center and also provides replication across many data centers for enterprise users.
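
High availability is configured in Vault's own server configuration file, which is also HCL. A minimal sketch, assuming Consul is used as the storage backend and TLS certificates already exist at the paths shown:

# Use Consul as the storage backend; with an HA-capable backend,
# Vault runs as an active node with standbys ready to take over.
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

# Serve the Vault API over TLS. The certificate paths are
# placeholders for wherever the operator keeps them.
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/vault.crt"
  tls_key_file  = "/etc/vault/tls/vault.key"
}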

LEARN MORE ABOUT VAULT


Nomad is a multi-datacenter-aware cluster manager and scheduler. It provides a consistent approach for deploying any application. This includes batch, dispatch, and long-running services:

• Batch workloads include big data applications that need jobs to complete quickly.

• Dispatch workloads include short-lived, elastic applications.

• Long-running services need to be secure and highly available.

Developers codify the requirements for applications in a declarative configuration file. Nomad uses this file to place the application across a fleet of machines, which could sit in a single cloud, span geographic regions within a cloud, or span multiple clouds.
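
A minimal sketch of such a job file, assuming the Docker driver and an nginx image purely for illustration:

# Run three copies of an nginx container as a long-running service.
# Datacenter names, counts, and resource sizes are illustrative.
job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "frontend" {
    count = 3

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.21"
      }

      resources {
        cpu    = 500   # MHz
        memory = 256   # MB
      }
    }
  }
}

Nomad decides which machines in the fleet run these three instances; the file says nothing about specific hosts.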

Infrastructure operators provision the fleet of machines, whereas developers use Nomad to handle the application deployment across machines. In this way, we decouple infrastructure provisioning from application deployment.

LEARN MORE ABOUT NOMAD


Consul provides a common backbone across hybrid infrastructure. It provides service discovery, monitoring, application configuration, and support for multi-datacenter networking topologies. It creates an automatic central registry—the single source of truth for infrastructure.

For example, a web server can use Consul to discover its upstream database or API services. While an application is running, Consul can monitor and flag degraded instances, direct traffic to healthy instances, and notify developers or operators of any issues.

Real-time service discovery allows development teams to avoid hard-coding network addresses. Instead, Consul pushes the discovery of other services into the application runtime. A running service broadcasts its availability, and can then be easily reached by other applications.
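
A minimal sketch of a service definition that a Consul agent could register, assuming a hypothetical web service that exposes a /health endpoint on port 8080:

# Register the "web" service with the local Consul agent and attach
# an HTTP health check so degraded instances can be flagged.
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"   # hypothetical endpoint
    interval = "10s"
    timeout  = "2s"
  }
}

Other applications can then locate healthy instances of "web" through Consul's DNS or HTTP interfaces instead of relying on hard-coded addresses.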

LEARN MORE ABOUT CONSUL

Accelerating Cloud Adoption

We’ve described a consistent toolset to empower operators and developers to provision, secure, connect, and run any infrastructure for any application.

It’s important for organizations to be able to quickly and efficiently run applications and infrastructure on the cloud best suited for their needs, while still retaining flexibility in their choice as applications and cloud offerings evolve.

This is the fundamental purpose of the HashiCorp suite:  to provide customers with the infrastructure automation capabilities they need as they move to cloud. The ‘lego piece’ approach of HashiCorp allows organizations to incrementally adopt the tooling they need and integrate with their existing systems.

HASHICORP SUITE

Updated: 06/29/17