
Run naked and save the whales
Why it’s (past) time to run containers on bare metal

Bryan Cantrill

CTO

[email protected]

@bcantrill

Container prehistory

• Containers are not a new idea: they originated as filesystem containers, via chroot, in Seventh Edition Unix

• chroot originated with Bill Joy, but the specifics are blurry, according to Kirk McKusick (via Poul-Henning Kamp and Robert Watson)
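To make the mechanism concrete, the containment that chroot provides can be sketched with Python's os.chroot wrapper around the system call. This is an illustrative sketch, not from the talk; chroot requires root privilege, so the sketch degrades gracefully without it:

```python
import os

def confine(new_root):
    """Restrict this process's view of the filesystem to new_root,
    as chroot(2) does. Returns True on success, or False when the
    process lacks the required privilege (chroot needs root, or
    CAP_SYS_CHROOT on Linux).
    """
    try:
        os.chroot(new_root)
        os.chdir("/")  # move the working directory inside the new root
        return True
    except PermissionError:
        return False

if __name__ == "__main__":
    if confine("/"):
        print("filesystem view confined")
    else:
        print("chroot requires privilege")
```

Everything a confined process opens is then resolved relative to the new root: the essence of a filesystem container, though with no isolation of processes, networking, or resources (the gaps that jails and zones later filled).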

Container history

• Seeking to provide a security mechanism, FreeBSD extended chroot into jails

• To provide workload consolidation, Sun introduced complete operating system virtualization with zones (née Project Kevlar)


Container limitations

• The (prioritized) design constraints for OS-based virtualization, as originally articulated for zones: Security, Isolation, Virtualization, Granularity, Transparency

• Not among these: running foreign binaries or emulating other operating systems!

• Despite its advantages in terms of tenancy and performance, OS-based virtualization didn’t fit the problem ca. early 2000s: consolidation required the ability to run entire legacy stacks unmodified (e.g. Windows)

Hardware-level virtualization

• Since the 1960s, the preferred approach for operating legacy stacks unmodified has been to virtualize the hardware

• A virtual machine is presented upon which each tenant runs an operating system that they choose (but must also manage)

• Effective for running legacy stacks, but with a clear inefficiency: there are as many operating systems on a machine as there are tenants

• Operating systems are heavy and don’t play well with others with respect to resources like DRAM, CPU, I/O devices, etc.!

• Still, hardware-level virtualization became the de facto standard in the cloud

Containers at Joyent

• Joyent runs OS containers in the cloud via SmartOS — and we have run containers in multi-tenant production since ~2006

• Adding support for hardware-based virtualization circa 2011 strengthened our resolve with respect to OS-based virtualization

• OS containers are lightweight and efficient — which is especially important as services become smaller and more numerous: overhead and latency become increasingly important!

• We emphasized their operational characteristics — performance, elasticity, tenancy — and for many years, we were a lone voice...

Containers as PaaS foundation?

• Some saw the power of OS containers to facilitate up-stack platform-as-a-service abstractions

• For example, dotCloud — a platform-as-a-service provider — built their PaaS on OS containers

• Struggling as a PaaS, dotCloud pivoted — and open sourced their container-based orchestration layer...

...and Docker was born

Docker revolution

• Docker has used the rapid provisioning + shared underlying filesystem of containers to allow developers to think operationally

• Developers can encode deployment procedures via an image

• Images can be reliably and reproducibly deployed as a container

• Images can be quickly deployed — and re-deployed

• Docker complements the library ethos of microservices

• Docker facilitates developer — and business! — agility
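The "encode deployment procedures via an image" point is precisely what a Dockerfile captures. A hypothetical minimal example for a small Node.js service; the base image, file names, and port here are illustrative assumptions, not from the talk:

```dockerfile
# Hypothetical Dockerfile for a small Node.js service;
# all names here are illustrative.
FROM node:alpine
WORKDIR /app
# The deployment procedure, dependency install included,
# is captured in the image itself:
COPY package.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Built once with `docker build -t svc .`, the image deploys (and re-deploys) reproducibly with `docker run svc`, which is the developer-thinks-operationally shift described above.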

Broader container revolution

• The Docker model has pointed to the future of containers

• Docker’s challenges today are largely operational: network virtualization, persistence, security, etc.

• Security concerns are not due to Docker per se, but rather to the architectural limitations of the Linux “container” substrate

• For multi-tenancy, state-of-the-art for Docker containers is to run in hardware virtual machines as Docker hosts (!!)

• Deploying OS containers via Docker hosts in hardware virtual machines negates their economic advantage!
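A back-of-the-envelope sketch of that negated advantage in Python. All of the numbers here (host DRAM, per-guest OS footprint, per-container working set, VM size) are assumptions for illustration only, not measurements from the talk:

```python
# Assumed, illustrative numbers: a 256 GB host, ~1 GB of DRAM consumed
# by each hardware VM's guest OS, and a 2 GB working set per container.
HOST_DRAM_GB = 256
GUEST_OS_GB = 1
CONTAINER_GB = 2

def containers_in_vms(vm_size_gb):
    """Containers bin-packed into fixed-size hardware VMs: each VM
    pays the guest-OS tax before it can host its first container."""
    vms = HOST_DRAM_GB // vm_size_gb
    per_vm = (vm_size_gb - GUEST_OS_GB) // CONTAINER_GB
    return vms * per_vm

def containers_on_metal():
    """Containers sharing the host OS directly: no per-tenant kernel."""
    return HOST_DRAM_GB // CONTAINER_GB

print(containers_in_vms(8))   # 32 VMs x 3 containers = 96
print(containers_on_metal())  # 128
```

The exact figures don't matter; the point is that every guest OS consumes DRAM that could have gone to containers, and bin-packing into fixed VM sizes strands still more.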

Containers and the Jevons paradox

• The Jevons paradox seems very likely to hold for containers: greater efficiency will result in a net increase in consumption!

• Efficiency gains from containers are in terms of developer time...

• ...but requiring containers to be scheduled in VMs induces operational inefficiencies: every operator must now think like a cloud operator — maximizing density within fixed-cost VMs

• Greater consumption + operational inefficiencies threaten to slow the container revolution — or make it explosive in terms of cost

• To realize the full economic promise of the container revolution, we need container-native infrastructure!

Container-native infrastructure?

• SmartOS has been container-native since its inception — and running in multi-tenant, internet-facing production for many years

• Could we achieve an ideal world that combines the development model of Docker with the container-native model of SmartOS?

• This would be the best of all worlds: agility of Docker coupled with production-proven security and on-the-metal performance of SmartOS containers

• To effect this, we implemented a Linux system call table for SmartOS and the Docker Remote API for SmartDataCenter, our (open source!) cloud orchestration software

Triton: Docker + SmartOS

• In March, we launched Triton, which combines SmartOS and SmartDataCenter with our Docker Remote API endpoint

• With Triton, the notion of a Docker host is virtualized: to the Docker client, the datacenter is a large Docker host

• One never allocates VMs with Triton; one only allocates containers — and all Triton containers run directly on-the-metal

• All of the components of Triton are open source: you can download and install SmartDataCenter and run it yourself

• Generally available on the Joyent Public Cloud since June...
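On the client side, virtualizing the Docker host reduces to ordinary Docker client configuration. The endpoint below is a hypothetical placeholder; the environment variables are the standard ones the Docker client reads:

```shell
# Hypothetical endpoint: substitute the Docker Remote API
# address of your datacenter.
export DOCKER_HOST=tcp://docker.dc.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker
```

From there, unmodified `docker run` and `docker ps` invocations address the datacenter as if it were one large Docker host.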

Experiences with Triton

• We went into Triton assuming that its appeal would be that containers are running on the metal…

• ...but the big win of Triton seems to be its simplicity: eliminating the virtual machine allows for a switch from the allocation mindset to the consumption mindset

• Ease of container creation now brings other production concerns into stark relief, e.g. service composition and discovery

• While Docker has become the de facto standard for the container format, there isn’t (yet) consensus in these higher-level areas...

Upstack mayhem


• The upstack ramifications are entirely unclear — with many rival frameworks and approaches

• The rival frameworks are all open source

• Unlikely to be winner-take-all

• Productive mutation is not just possible but highly likely

• We are not yet at Peak Confusion in the container space!

• Complexity is the enemy — and approaches that aren’t container-native are likely to be complexity-additive

Future of containers

• For nearly a decade, we have believed that OS-virtualized containers represent the future of computing — and with the rise of Docker, this is no longer controversial

• But to achieve the full promise of containers, they must run directly on-the-metal — multi-tenant security is a constraint!

• The virtual machine is a vestigial abstraction; we must reject container-based infrastructure that implicitly assumes it

• Triton represents our belief that containers needn’t compromise: multi-tenant security, operational elasticity and on-the-metal performance!