
Mantl Documentation
Release 1.0.3

Cisco Systems, Incorporated

November 03, 2016


Contents

1 Getting Started

2 General Information about Mantl with Ansible
  2.1 Preparing to provision Cloud Hosts
  2.2 Deploying software via Ansible
  2.3 Checking your deployment

3 Components
  3.1 Calico
  3.2 Chronos
  3.3 Collectd
  3.4 Consul
  3.5 Distributive
  3.6 dnsmasq
  3.7 Docker
  3.8 ELK
  3.9 etcd
  3.10 GlusterFS
  3.11 Kafka
  3.12 Kubernetes
  3.13 Logstash
  3.14 Marathon
  3.15 Mesos
  3.16 Traefik
  3.17 ZooKeeper
  3.18 Common
  3.19 consul-template
  3.20 logrotate
  3.21 Nginx
  3.22 Vault

4 Addons

5 Security
  5.1 Security
  5.2 the security-setup script

6 Upgrading
  6.1 Overview


  6.2 Upgrading OS packages
  6.3 Upgrading from 1.1 to 1.2
  6.4 Upgrading from 1.0.3 to 1.1
  6.5 Upgrading from 0.5.1 to 1.0
  6.6 Upgrading from 1.1 to 1.2

7 Packer
  7.1 ansible.sh
  7.2 vagrant.sh
  7.3 vbox.sh
  7.4 cleanup.sh

8 FAQs
  8.1 What is the relationship between Mantl and OpenStack Magnum?
  8.2 Can I use Mantl with Kubernetes?
  8.3 Containers are great for running stateless applications but what about data/stateful services?

9 Licenses
  9.1 Listing of Licenses

10 License


Mantl is a modern platform for rapidly deploying globally distributed services. Please see the README2 for a high-level overview.

Contents:

1 https://gitter.im/CiscoCloud/mantl
2 https://github.com/CiscoCloud/mantl/blob/master/README.md


CHAPTER 1

Getting Started

Note: This document assumes you have a working Ansible installation1. If you don't, install Ansible before continuing. This can be done simply by running pip install -r requirements.txt from the root of the project.

It also assumes you have a working Terraform installation. You can download Terraform from Terraform downloads2.

1 http://docs.ansible.com/intro_installation.html#installing-the-control-machine
2 https://www.terraform.io/downloads.html


CHAPTER 2

General Information about Mantl with Ansible

The Mantl project uses Ansible to bring up nodes and clusters. This generally means that you need three things:

1. Hosts to use as the base for your cluster

2. An inventory file1 with the hosts you want to be modified. Mantl includes an Ansible inventory for terraformed hosts, terraform.py2.

3. A playbook matching Ansible roles to hosts. Mantl organizes its components in sample.yml3, which we recommend copying to mantl.yml for the possibility of later customization. You can read more about playbooks4 in the Ansible docs5.
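As a rough sketch of how those pieces fit together once your hosts exist (your deployment may need additional options, such as extra security variables), the copy-and-deploy step looks like this:

# keep your customizations out of the tracked sample playbook
cp sample.yml mantl.yml

# deploy against the hosts discovered by the Terraform dynamic inventory
ansible-playbook -i plugins/inventory/terraform.py mantl.yml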

2.1 Preparing to provision Cloud Hosts

The playbooks and roles in this project will work on whatever provider (or bare metal) you care to spin up, as long as it can run CentOS 7 or equivalent.

Your hosts will have to be accessible with your SSH key. If you're unfamiliar with SSH keys, please read DigitalOcean's guide to setting up SSH keys6.

Here are some guides specific to each of the platforms that Mantl supports:

2.1.1 OpenStack

Mantl uses Terraform to provision hosts in OpenStack. You can download Terraform from terraform.io7.

This project provides a number of playbooks designed for doing host maintenance tasks on OpenStack hosts. You can find them in playbooks/ in the main project directory.

Configuring OpenStack authentication

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/openstack-modules.sample.tf. It looks like this:

1 http://docs.ansible.com/intro_inventory.html
2 https://github.com/CiscoCloud/mantl/blob/master/plugins/inventory/terraform.py
3 https://github.com/CiscoCloud/mantl/blob/master/sample.yml
4 http://docs.ansible.com/ansible/playbooks.html
5 http://docs.ansible.com/ansible/
6 https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2
7 https://www.terraform.io/downloads.html


variable subnet_cidr { default = "10.0.0.0/24" }
variable public_key { default = "/home/you/.ssh/id_rsa.pub" }
variable ssh_user { default = "cloud-user" }

variable name { default = "mantl" }          # resources will start with "mantl-"
variable control_count { default = "3" }     # mesos masters, zk leaders, consul servers
variable worker_count { default = "5" }      # worker nodes
variable kubeworker_count { default = "2" }  # kubeworker nodes
variable edge_count { default = "2" }        # load balancer nodes

# Run 'nova network-list' to get these names and values
# Floating ips are optional
variable external_network_uuid { default = "uuid-of-your-external-network" }
variable floating_ip_pool { default = "name-of-your-floating-ip-pool" }

# Run 'nova image-list' to get your image name
variable image_name { default = "your-CentOS-7" }

# DNS servers passed to Openstack subnet
variable dns_nameservers { default = "" }  # comma separated list of ips, e.g. "8.8.8.8,8.8.4.4"

# Openstack flavors control the size of the instance, i.e. m1.xlarge.
# Run 'nova flavor-list' to list the flavors in your environment
# Below are typical settings for mantl
variable control_flavor_name { default = "your-XLarge" }
variable worker_flavor_name { default = "your-Large" }
variable kubeworker_flavor_name { default = "your-Large" }
variable edge_flavor_name { default = "your-Small" }

# Size of the volumes
variable control_volume_size { default = "50" }
variable worker_volume_size { default = "100" }
variable edge_volume_size { default = "20" }

module "ssh-key" {
  source = "./terraform/openstack/keypair_v2"
  public_key = "${var.public_key}"
  keypair_name = "mantl-key"
}

# Create a network with an externally attached router
module "network" {
  source = "./terraform/openstack/network"
  external_net_uuid = "${var.external_network_uuid}"
  subnet_cidr = "${var.subnet_cidr}"
  name = "${var.name}"
  dns_nameservers = "${var.dns_nameservers}"
}

# Create floating IPs for each of the roles
# These are not required if your network is exposed to the internet
# or you don't want floating ips for the instances.
module "floating-ips-control" {
  source = "./terraform/openstack/floating-ip"
  count = "${var.control_count}"
  floating_pool = "${var.floating_ip_pool}"
}


module "floating-ips-worker" {source = "./terraform/openstack/floating-ip"count = "${var.worker_count}"floating_pool = "${var.floating_ip_pool}"

}

module "floating-ips-kubeworker" {source = "./terraform/openstack/floating-ip"count = "${var.kubeworker_count}"floating_pool = "${var.floating_ip_pool}"

}

module "floating-ips-edge" {source = "./terraform/openstack/floating-ip"count = "${var.edge_count}"floating_pool = "${var.floating_ip_pool}"

}

# Create instances for each of the rolesmodule "instances-control" {

source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.control_count}"role = "control"volume_size = "${var.control_volume_size}"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-control.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.control_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-worker" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.worker_count}"volume_size = "${var.worker_volume_size}"count_format = "%03d"role = "worker"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-worker.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.worker_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-kubeworker" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.kubeworker_count}"volume_size = "100"count_format = "%03d"role = "kubeworker"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-kubeworker.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"

2.1. Preparing to provision Cloud Hosts 7

Page 12: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

flavor_name = "${var.kubeworker_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-edge" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.edge_count}"volume_size = "${var.edge_volume_size}"count_format = "%02d"role = "edge"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-edge.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.edge_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

Copy that file in its entirety to the root of the project to start customization. NOTE: All configuration entries need to be completed. In the next sections, we'll explain how to obtain these settings.
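For example, assuming you want to call the copy openstack.tf (the filename the rest of this section refers to):

cp terraform/openstack-modules.sample.tf openstack.tf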

You can also use this file as a base for further customization. For example, you can change the names of the modules to be specific to your environment. While we will explore the authentication variables in the next sections, you will need to provide the region, flavor_name, and other such variables yourself. You can get these variables from the OpenStack command line tools. For example:

• glance image-list for image_name

• keystone tenant-list for tenant_id and tenant_name

• nova flavor-list for control_flavor_name and worker_flavor_name

Or use the appropriate OpenStack commands such as openstack project list or the commands below.

• openstack image list for image_name

• openstack network list for net_id

• openstack flavor list for control_flavor_name / worker_flavor_name

Generate SSH keys

If you do not have ssh keys already, generate a new pair for use with the project. You need to add the path to this key (public_key) to the openstack.tf file.

ssh-keygen -t rsa -f /path/to/project/sshkey -C "sshkey"

Getting OpenStack tenant settings

auth_url, tenant_name, and tenant_id are unique for each OpenStack datacenter. You can get these from the OpenStack web console:

1. Log into the OpenStack web console and in the Manage Compute section, select "Access & Security".

2. Select the “API Access” tab.

3. Click on the “Download the OpenStack RC File” button. We’ll use this file to set up authentication.


4. Download the RC file for each Data Center you want to provision servers in. You may have to log into different OpenStack web consoles.

Open the file that you just downloaded. We are interested in three of the environment variables that are exported:

export OS_AUTH_URL=https://my.openstack.com:5000/v2.0
export OS_TENANT_ID=my-long-unique-id
export OS_TENANT_NAME="my-project"

Update your Terraform file with these values for the appropriate fields, and save the downloaded file for use with the maintenance playbooks (you'll just need to source the environment variables into your shell).

OpenStack Security Group

In order for Terraform to apply correctly, you need to create a security group in OpenStack for Mantl.

You can either log in to the Web UI to perform this task or use the openstack command line interface as below.

openstack security group create <group_name>

Once your group is created, ensure you update the openstack.tf file accordingly.

OpenStack Username/Password

The playbooks get Username/Password information via environment variables:

OS_USERNAME
    Your OpenStack username

OS_PASSWORD
    Your OpenStack password

Before running Terraform or any playbooks, run the following command to pull in your username and password for Ansible to use, changing the file name and location to the location of your OpenStack RC file:

source ~/Downloads/my-project.rc


Note: The default OpenStack RC file will prompt for your password in order to set OS_PASSWORD.

Once you're all set up there, run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.
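Put together, a typical run looks roughly like the sketch below; it assumes your customized openstack.tf sits at the project root and that your OpenStack RC file was saved as ~/Downloads/my-project.rc:

# load OS_USERNAME, OS_PASSWORD and the other OpenStack variables
source ~/Downloads/my-project.rc

# fetch the referenced modules, preview the changes, then build the cluster
terraform get
terraform plan
terraform apply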

2.1.2 Google Compute Engine

New in version 1.0: multi-zone support and terraform modularization

As of Mantl 0.3 you can bring up Google Compute Engine environments using Terraform. Mantl uses Terraform to provision hosts. You can download Terraform from terraform.io8.

Configuring Google Compute Engine for Terraform

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/gce.sample.tf. The beginning of it looks like this:

variable "control_count" { default = 3 }variable "datacenter" {default = "gce"}variable "edge_count" { default = 3}variable "image" {default = "centos-7-v20150526"}variable "long_name" {default = "mantl"}variable "short_name" {default = "mi"}variable "ssh_key" {default = "~/.ssh/id_rsa.pub"}variable "ssh_user" {default = "centos"}variable "worker_count" {default = 1}variable "zones" {

default = "us-central1-a,us-central1-b"}

provider "google" {account_file = ""credentials = "${file("account.json")}"project = "mantl-0000"region = "us-central1"

}

Copy that file in its entirety to the root of the project as gce.tf to start customization. In the next sections, we'll explain how to obtain these settings.

Basic Settings

project, region and zones are unique values for each project in Google Compute Engine. project is available from the project overview page (use the Project ID, not the Project Name). You can select which region and zones you want to use from any of the GCE zones. If you're in the United States, (region) us-central1 and (zones) us-central1-a,us-central1-b,us-central1-c are good choices. If you're in Europe, europe-west1 and europe-west1-b,europe-west1-c might be your best bets. If you haven't previously activated Compute Engine for your project, this is a good time to do it.

8 https://www.terraform.io/downloads.html


If you don’t want to commit these values in a file, you can source them from the environment instead:

GOOGLE_PROJECT
    The ID of a project to apply resources to.

GOOGLE_REGION
    The region to operate under.
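For example (both values here are placeholders for your own project ID and region):

export GOOGLE_PROJECT=my-gcp-project-id
export GOOGLE_REGION=us-central1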

image is the GCE image to use for your cluster instances. You can find image names under Images in the Compute Engine section of the GCP console.

ssh_username is the default user name for SSH access to your cluster hosts. This value will be dependent on the image that you use. Common values are centos or rhel.

datacenter is a name to identify your datacenter; this is important if you have more than one datacenter.

short_name is appended to the name tag and dns (if used) of each of the nodes to help better identify them.

control_count, edge_count and worker_count are the number of GCE instances that will get deployed for each node type.

control_type, edge_type and worker_type are used to specify the GCE machine type9.
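If you want to set them explicitly, you can declare them alongside the other variables in gce.tf. The machine types below are only illustrative values, not defaults from the sample file:

variable "control_type" { default = "n1-standard-2" }
variable "worker_type" { default = "n1-standard-2" }
variable "edge_type" { default = "n1-standard-1" }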

9 https://cloud.google.com/compute/docs/machine-types/


account.json

Terraform also needs a service account to be able to create and manage resources in your project. You can create one by going to the "Credentials" screen under "API Manager" in the GCP Products and Services menu. Service accounts are created under New credentials -> Service account key.

Note: You'll need to be an account owner to create this file - if you're not, ask your account owner to do this step for you.

You will either need to create a new service account or use an existing one. For this example we created one called terraform.


Once you've created your account, your browser will download a JSON file containing the credentials. Point credentials to the path you decide to store that file in. If you're running Terraform from a Google Compute instance with an associated service account, you may leave the credentials parameter blank.

Provisioning

Once you're all set up with the provider, customize your modules (for control_count, edge_count and worker_count). Make sure your local ssh-agent is running and your ssh key has been added; this is required by the Terraform provisioner. Run ssh-add ~/.ssh/id_rsa to add your ssh key. Run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

Note: If you see the prompt below when running terraform plan or apply, you will need to add account_file = "" to the provider section of your gce.tf file:

provider.google.account_file
  Enter a value:

This is a known bug in older versions of Terraform.


Terraform State

Terraform stores the state10 of your infrastructure in a file called "terraform.tfstate". This file can be stored locally or in a remote11 location such as consul12. If you use the gce.sample.tf that is provided, by default the state of all the modules is stored in a local terraform.tfstate file at the root of this project.

Instead of storing the state for all the modules in one file, you might deploy the modules independently and have a different terraform.tfstate for each module (either locally or remote). This can help with blue/green deployments, or with making sure you don't accidentally override more static parts of the infrastructure, such as the network.

In the gce.sample.tf we have included examples of how you would reference a remote state file for network variables.

To create terraform.tfstate locally for the network module, you would simply run terraform get, terraform plan and terraform apply in the terraform/gce/network/ directory. Then in your gce.tf file you would want to comment out:

#module "gce-network" {# source = "./terraform/gce/network"# network_ipv4 = "10.0.0.0/16"#}

and uncomment:

resource "terraform_remote_state" "gce-network" {backend = "_local"config {path = "./terraform/gce/network/terraform.tfstate"

}}

and change all the network_name variables for the nodes to be:

network_name = "${terraform_remote_state.gce-network.output.network_name}"

Ideally you would store the state remotely, but configuring that is outside the scope of this document. The following blog explains how to configure and use remote state, Terraform remote state13.

Configuring DNS with Google Cloud DNS

You can set up your DNS records with Terraform:

DNS

New in version 0.3.

Terraform lets you configure DNS for your instances. The DNS provider is loosely coupled from the server provider, so you could for example use the dnsimple provider for either OpenStack or AWS hosts, or use the Google Cloud DNS provider for DigitalOcean hosts.

Providers

These are the supported DNS providers:

10 https://www.terraform.io/docs/state/index.html
11 https://www.terraform.io/docs/state/index.html
12 https://github.com/hashicorp/terraform/blob/master/state/remote/remote.go#L38
13 http://blog.mattiasgees.be/2015/07/29/terraform-remote-state/


CloudFlare

New in version 0.5.

Terraform can use CloudFlare to provide DNS records for your cluster, independent of which provider you use to provision your servers.

CloudFlare Username/Token

The easiest way to configure credentials for CloudFlare is by setting environment variables:

CLOUDFLARE_EMAIL
    Your e-mail address for the CloudFlare account

CLOUDFLARE_TOKEN
    The CloudFlare token (found in the CloudFlare admin panel)

Alternatively, you can set up the CloudFlare provider credentials in your .tf file:

provider "cloudflare" {email = "the e-mail address for your CloudFlare account"token = "your CloudFlare token"

}

DNSimple

New in version 0.3.

Terraform can use DNSimple to provide DNS records for your cluster, independent of which provider you use to provision your servers.

DNSimple Username/Token

The easiest way to configure credentials for DNSimple is by setting environment variables:

DNSIMPLE_EMAIL
    Your e-mail address for the DNSimple account

DNSIMPLE_TOKEN
    The DNSimple token (found in the DNSimple admin panel)

Alternatively, you can set up the DNSimple provider credentials in your .tf file:

provider "dnsimple" {token = "your dnsimple token"email = "your e-mail address for the dnsimple account"

}

GCP Cloud DNS

Terraform can use google_dns_record_set resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the managed_zone parameter. You can find your Managed Zone name in the GCP Networking Console.


If you haven't set up a managed zone for the domain you're using, you can do that with Terraform as well; just add this extra snippet in your .tf file:

resource "google_dns_managed_zone" "managed-zone" {name = "my-managed-zone"dns_name = "example.com."description "Managed zone for example.com."

}

In your gce.tf, you will want to enable the cloud-dns module:

module "cloud-dns" {source = "./terraform/gce/dns"control_count = "${var.control_count}"control_ips = "${module.control-nodes.control_ips}"domain = "mydomain.com"edge_count = "${var.edge_count}"edge_ips = "${module.edge-nodes.edge_ips}"lb_ip = "${module.network-lb.public_ip}"managed_zone = "my-cloud-dns-zone"short_name = "${var.short_name}"subdomain = "service"worker_count = "${var.worker_count}"worker_ips = "${module.worker-nodes.worker_ips}"

}


Route53

Terraform can use aws_route53_record resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the hosted_zone_id parameter. You can find your own hosted zone ID in your AWS Route 53 console.

Route53 uses your normal Amazon Web Services provider credentials.

# Example setup for an AWS Route53
module "route53" {
  source = "./terraform/aws/route53/dns"
  control_count = "${var.control_count}"
  control_ips = "${module.control-nodes.ec2_ips}"
  domain = "my-domain.com"
  edge_count = "${var.edge_count}"
  edge_ips = "${module.edge-nodes.ec2_ips}"
  hosted_zone_id = "XXXXXXXXXXXX"
  short_name = "${var.short_name}"
  subdomain = ".dev"
  worker_count = "${var.worker_count}"
  worker_ips = "${module.worker-nodes.ec2_ips}"
  kubeworker_count = "${var.kubeworker_count}"
  kubeworker_ips = "${module.kubeworker-nodes.ec2_ips}"
}

DNS Records and Configuration

The providers create a uniform set of DNS A records:

• {short-name}-control-{nn}.node{subdomain}.{domain}

• {short-name}-edge-{nn}.node{subdomain}.{domain}

• {short-name}-worker-{nnn}.node{subdomain}.{domain}

• {control}{subdomain}.{domain}

• *.{subdomain}.{domain}

For example, with short-name=mantl, domain=example.com, a blank subdomain, 3 control nodes, 4 worker nodes, 2 Kubernetes worker nodes, and 2 edge nodes, that will give us these DNS records:

• mantl-control-01.node.example.com

• mantl-control-02.node.example.com

• mantl-control-03.node.example.com

• mantl-worker-001.node.example.com

• mantl-worker-002.node.example.com

• mantl-worker-003.node.example.com

• mantl-worker-004.node.example.com


• mantl-kubeworker-001.node.example.com

• mantl-kubeworker-002.node.example.com

• mantl-edge-01.node.example.com

• mantl-edge-02.node.example.com

• control.example.com (pointing to control 1)

• control.example.com (pointing to control 2)

• control.example.com (pointing to control 3)

• *.example.com (pointing to edge node load balancer)

If you don't want the DNS records hanging off the apex, you can specify the subdomain parameter to the DNS providers, which will be inserted in the records just before the apex. For example, if subdomain=.mantl in the previous config, the wildcard records would be *.mantl.example.com.

Warning: Due to a limitation in Terraform's string support, the subdomain must begin with a period (for example, .mantl).

The node records are intended to be used to access each node individually for maintenance. You can access the frontend web components of the Mantl cluster through control.example.com, which will direct you to the rest of the stack.

You can use the wildcard records for load-balanced access to any app in Marathon. For example, if you have an app named test running in Marathon, you can access it at test.example.com. Please see the Traefik configuration for more details.
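Assuming the example records above have been created, you can verify that they resolve before relying on them; dig is just one convenient way to check:

dig +short mantl-control-01.node.example.com
dig +short control.example.com
dig +short test.example.com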

Configuration

A good way to configure DNS is to move the values common to your cloud config and DNS config into separate variables. You can do that like this:

variable control_count { default = 3 }
variable worker_count { default = 2 }
variable kubeworker_count { default = 2 }
variable edge_count { default = 2 }
variable short_name { default = "mantl" }

Then use those variables in the module like this:

module "dns" {source = "./terraform/cloudflare"

control_count = "${var.control_count}"control_ips = "${module.do-hosts.control_ips}"domain = "mantl.io"edge_count = "${var.edge_count}"edge_ips = "${module.do-hosts.edge_ips}"short_name = "${var.short_name}"subdomain = ".do.test"worker_count = "${var.worker_count}"worker_ips = "${module.do-hosts.worker_ips}"kubeworker_count = "${var.kubeworker_count}"kubeworker_ips = "${module.do-hosts.kubeworker_ips}"

}


Configuration Variables

Configuration is done with a set of consistent variables across the providers:

control_count, worker_count, kubeworker_count, and edge_count
    The count of nodes for each role.

control_ips, worker_ips, kubeworker_ips, and edge_ips
    A comma-separated list of IPs. The cloud provider modules all export this as control_ips, worker_ips, kubeworker_ips, and edge_ips as well, so you can plug it in like so:

control_ips = "${module.your-hosts.control_ips}"

domain
    The top level domain to add the records to.

Example: mantl.io

short_name
    The same short name passed into the cloud provider, used to generate consistent names.

subdomain
    A path to put between the top-level domain and the generated records. Must begin with a period.

Example: .apps

control_subdomain
    The name for the control group (to generate control.yourdomain.com). By default, this is control, but you can change it to whatever you'd like.

2.1.3 Amazon Web Services

New in version 1.0: multi-az support and terraform modularization

As of Mantl 0.3 you can bring up Amazon Web Services environments using Terraform. You can download Terraform from terraform.io14.

Configuring Amazon Web Services for Terraform

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/aws.sample.tf. The beginning of it looks like this:

variable "amis" {default = {us-east-1 = "ami-6d1c2007"us-west-2 = "ami-d2c924b2"us-west-1 = "ami-af4333cf"eu-central-1 = "ami-9bf712f4"eu-west-1 = "ami-7abd0209"ap-southeast-1 = "ami-f068a193"ap-southeast-2 = "ami-fedafc9d"ap-northeast-1 = "ami-eec1c380"sa-east-1 = "ami-26b93b4a"

}}variable "availability_zones" {

default = "a,b,c"}

14 https://www.terraform.io/downloads.html


variable "control_count" { default = 3 }variable "datacenter" {default = "aws-us-west-2"}variable "edge_count" { default = 2 }variable "region" {default = "us-west-2"}variable "short_name" {default = "mantl"}variable "long_name" {default = "mantl"}variable "ssh_username" {default = "centos"}variable "worker_count" { default = 4 }variable "kubeworker_count" { default = 2 }variable "dns_subdomain" { default = ".dev" }variable "dns_domain" { default = "my-domain.com" }variable "dns_zone_id" { default = "XXXXXXXXXXXX" }variable "control_type" { default = "m3.medium" }variable "edge_type" { default = "m3.medium" }variable "worker_type" { default = "m3.large" }variable "kubeworker_type" { default = "m3.large" }

provider "aws" {region = "${var.region}"

}

Copy that file in its entirety to the root of the project as aws.tf to start customization. In the next sections, we'll describe the settings that you need to configure.

Do not copy the text contents above into a file; if you do not have the terraform/aws.sample.tf file, you need to clone the mantl repository. Please note that newer versions of this file do not have "access_key" or "secret_key" lines; we automatically find your AWS credentials from Amazon's new "AWS Credentials file" standard.

Store your credentials like below in a file called ~/.aws/credentials on Linux/Mac, or %USERPROFILE%\.aws\credentials on Windows.

[default]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY

If you do not have an AWS access key ID and secret key, then follow the "Creating an IAM User" section below. If you already have working AWS credentials, you can skip this step.

Creating an IAM User

Before running Terraform, we need to supply it with valid AWS credentials. While you could use the credentials for your AWS root account, it is not recommended15. In this section, we'll cover creating an IAM User16 that has the necessary permissions to build your cluster with Terraform.

Note: You'll need to have an existing AWS account with sufficient IAM permissions in order to follow along. If not, ask your account owner to perform this step for you.

First, sign in to your AWS Console and navigate to the Identity & Access Management (IAM)17 service.

15 http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
16 http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html
17 https://console.aws.amazon.com/iam/home


Next, navigate to the “Users” screen and click the “Create New Users” button.

You will be given the opportunity to create 5 different users on the next screen. For our purposes, we are just going to create one: "mantl". Make sure that you leave the "Generate an access key for each user" option checked and click the "Create" button.


On the next screen, you will be able to view and download your new Access Key ID and Secret Access Key. Make sure you capture these values in a safe and secure place as you will need them in the next section. You won't be able to retrieve your secret key later (although you can generate a new one, if needed).

The next step is to grant permissions to your new IAM user. Navigate back to the "Users" section and then click on the user name you just created. On the next screen, you will be able to manage the groups your user belongs to and to grant the permissions to view and modify AWS resources. For this example, we will not be using groups but that would be an option if you wanted to create multiple IAM users with the same permissions. We are going to keep it simple and use a managed policy to grant the necessary permissions to our IAM user.

Click the “Attach Policy” button.


On the "Attach Policy" screen you will see a long list of pre-built permissions policies. You can either scroll through the list or use the search filter to find the policy named "AmazonEC2FullAccess". Check the box next to that policy and click the "Attach Policy" button.


That’s it. At this point, your IAM user has sufficient privileges to provision your cluster with Terraform.

Note: Technically the "AmazonEC2FullAccess" managed policy grants more permissions than are actually needed. If you are interested in configuring your IAM user with the minimum set of permissions to provision a cluster, you can see the custom policy included at the bottom of this document.

Note: If you want to manage DNS with Route 53, you will need to attach a Route 53 policy as well.

Provider Settings

access_key and secret_key are the required credentials needed by Terraform to interact with resources in your AWS account. AWS credentials can be retrieved when creating a new account or IAM user. New keys can be generated and retrieved by managing Access Keys in the IAM Web Console. If you don't want to commit these values in the Terraform template, you can add them to your ~/.aws/credentials18 file or source them from the environment instead:

AWS_ACCESS_KEY_ID
    The AWS Access Key for a valid AWS account or IAM user with the necessary permissions.

18 https://blogs.aws.amazon.com/security/post/Tx3D6U6WSFGOK2H/A-New-and-Standardized-Way-to-Manage-Credentials-in-the-AWS-SDKs


AWS_SECRET_ACCESS_KEY
    The AWS secret key.

Note: As a best practice19, it is preferred that you use credentials for an IAM user with appropriate permissions rather than using root account credentials.

region is the AWS region20 where your cluster will be provisioned. As an alternative to specifying region in the file, it can be read from the environment:

AWS_DEFAULT_REGION
    The AWS region in which to provision cluster instances.
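For example, to provide all three values from the environment (the keys shown are the placeholder credentials from Amazon's documentation, not real ones):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2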

Basic Settings

short_name is appended to the name tag and dns (if used) of each of the nodes to help better identify them. If you are planning to deploy multiple mantl clusters into the same AWS account, you'll need to change this (otherwise AWS items like ssh key names will conflict and the second terraform plan will fail).

The defaults for the below settings will work out of the box in Amazon's US-WEST-1 datacenter; change them if you don't want these defaults, or if you want larger VMs for each of the Mantl nodes.

region is the name of the region21 where your cluster resources will be provisioned. As long as your control, worker and edge count is greater than 1, your nodes will be spread across the availability zones in your region.

availability_zones are the availability zones in your region that you want to deploy your EC2 instances to.

source_ami is the EC2 AMI to use for your cluster instances. This must be an AMI id that is available in the region you specified.

ssh_username is the default user name for SSH access to your cluster hosts. This value will be dependent on the source_ami that you use. Common values are centos or ec2-user.

datacenter is a name to identify your datacenter; this is important if you have more than one datacenter.

control_count, edge_count and worker_count are the number of EC2 instances that will get deployed for each node type.

control_type, edge_type and worker_type are used to specify the EC2 instance type22 for your control nodes and worker nodes and they must be compatible with the source_ami you have specified. The default EC2 instance type is an m3.medium.

Security Setup

Mantl doesn't ship with default passwords or certs. For security, we have provided a script to generate all the security configuration for your deployment.

Please run ./security_setup from the base of the mantl repository. This will generate certificates and other security tokens needed for the mantl deployment, as well as prompting you for a mantl admin password.

If you get an 'Import' error when running security setup, your local machine lacks certain python modules that the script needs. Please try pip install pyyaml and then re-run ./security_setup.

19 http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
20 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
21 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
22 https://aws.amazon.com/ec2/instance-types/


Provisioning

Once you're all set up with the provider, customize your modules (for control_count, worker_count, etc.), run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster.

After terraform apply has completed without errors, you're ready to continue. Next, follow the instructions at getting started to install Mantl on your new AWS VMs.

The below sections are for more information / customization only. They are not required.

Terraform State

Terraform stores the state23 of your infrastructure in a file called "terraform.tfstate". This file can be stored locally or in a remote24 location such as S3. If you use the aws.sample.tf that is provided, by default the state of all the modules is stored in a local terraform.tfstate file at the root of this project.

Instead of storing the state for all the modules in one file, you might deploy the modules independently and have a different terraform.tfstate for each module (either locally or remote). This can help with blue/green deployments, or with making sure you don't accidentally override more static parts of the infrastructure such as a VPC.

In the aws.sample.tf we have included examples of how you would reference a remote state file for VPC variables.

To create terraform.tfstate locally for the VPC module, you would simply run terraform get, terraform plan and terraform apply in the terraform/aws/vpc/ directory. Then in your aws.tf file you would want to comment out:

module "vpc" {source ="./terraform/aws/vpc"availability_zones = "${availability_zones}"short_name = "${var.short_name}"region = "${var.region}"

}

And uncomment:

#resource "terraform_remote_state" "vpc" {# backend = "_local"# config {# path = "./vpc/terraform.tfstate"# }# }

#availability_zones = "${terraform_remote_state.vpc.output.availability_zones}"#default_security_group_id = "${terraform_remote_state.vpc.output.default_security_group}"#vpc_id = "${terraform_remote_state.vpc.output.vpc_id}"#vpc_subnet_ids = "${terraform_remote_state.vpc.output.subnet_ids}"

Ideally you would store the state remotely, but configuring that is outside the scope of this document. This25 is a good explanation on how to configure and use remote state.

23 https://www.terraform.io/docs/state/index.html
24 https://www.terraform.io/docs/state/index.html
25 http://blog.mattiasgees.be/2015/07/29/terraform-remote-state/


Custom IAM Policy

At the time of this writing, the following IAM policy grants the minimal permissions needed to provision an AWS cluster with Terraform.

{"Version": "2012-10-17","Statement": [{

"Sid": "Stmt1433450536000","Effect": "Allow","Action": [

"ec2:AssociateRouteTable","ec2:AttachInternetGateway","ec2:AttachVolume","ec2:AuthorizeSecurityGroupIngress","ec2:CreateInternetGateway","ec2:CreateRoute","ec2:CreateRouteTable","ec2:CreateSecurityGroup","ec2:CreateSubnet","ec2:CreateTags","ec2:CreateVolume","ec2:CreateVpc","ec2:DeleteInternetGateway","ec2:DeleteKeyPair","ec2:DeleteRouteTable","ec2:DeleteSecurityGroup","ec2:DeleteSubnet","ec2:DeleteVolume","ec2:DeleteVpc","ec2:DescribeImages","ec2:DescribeInstanceAttribute","ec2:DescribeInstances","ec2:DescribeInternetGateways","ec2:DescribeKeyPairs","ec2:DescribeNetworkAcls","ec2:DescribeRouteTables","ec2:DescribeSecurityGroups","ec2:DescribeSubnets","ec2:DescribeVolumes","ec2:DescribeVpcAttribute","ec2:DescribeVpcClassicLink","ec2:DescribeVpcs","ec2:DetachInternetGateway","ec2:DetachVolume","ec2:DisassociateRouteTable","ec2:ImportKeyPair","ec2:ModifyInstanceAttribute","ec2:ModifyVpcAttribute","ec2:ReplaceRouteTableAssociation","ec2:RevokeSecurityGroupEgress","ec2:RunInstances","ec2:TerminateInstances","elasticloadbalancing:*","iam:AddRoleToInstanceProfile","iam:CreateInstanceProfile","iam:CreateRole",

2.1. Preparing to provision Cloud Hosts 27

Page 32: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

"iam:DeleteInstanceProfile","iam:DeleteRole","iam:DeleteRolePolicy","iam:DeleteServerCertificate","iam:GetInstanceProfile","iam:GetRole","iam:GetRolePolicy","iam:GetServerCertificate","iam:ListInstanceProfilesForRole","iam:PassRole","iam:PutRolePolicy","iam:RemoveRoleFromInstanceProfile","iam:UploadServerCertificate"

],"Resource": [

"*"]

}]

}

For managing DNS with Route 53, you can use a policy like the following:

{"Version": "2012-10-17","Statement": [{

"Effect": "Allow","Action": [

"route53:ChangeResourceRecordSets","route53:GetHostedZone","route53:ListResourceRecordSets"

],"Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_HOSTED_ID"

},{

"Effect": "Allow","Action": [

"route53:GetChange"],"Resource": "arn:aws:route53:::change/*"

}]

}

You would replace YOUR_ZONE_HOSTED_ID with the hosted zone ID of your domain in Route 53.
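If you have the AWS CLI installed, you can also look the hosted zone ID up from the command line instead of the console (an optional convenience, not something Mantl requires):

aws route53 list-hosted-zones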

2.1.4 DigitalOcean

New in version 0.3.

As of Mantl 0.3 you can bring up DigitalOcean environments using Terraform.

Configuring Terraform for DigitalOcean

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We’ll be filling inthe authentication variables for the template located at terraform/digitalocean.sample.tf. It looks like

28 Chapter 2. General Information about Mantl with Ansible

Page 33: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

this:

# All of your resources will be prefixed by this name
variable "name" { default = "mantl" }
variable "region" { default = "nyc3" } # Must have metadata support

provider "digitalocean" {
  token = ""
}

module "do-keypair" {name = "${var.name}"source = "./terraform/digitalocean/keypair"public_key_filename = "~/.ssh/id_rsa.pub"

}

module "control-nodes" {source = "./terraform/digitalocean/instance"name = "${var.name}"region = "${var.region}"keypair_id = "${module.do-keypair.keypair_id}"

role = "control"count = "3"

}

module "worker-nodes" {source = "./terraform/digitalocean/instance"name = "${var.name}"region = "${var.region}"keypair_id = "${module.do-keypair.keypair_id}"

role = "worker"}

module "kubeworker-nodes" {source = "./terraform/digitalocean/instance"name = "${var.name}"region = "${var.region}"keypair_id = "${module.do-keypair.keypair_id}"

role = "kubeworker"}

module "edge-nodes" {source = "./terraform/digitalocean/instance"name = "${var.name}"region = "${var.region}"keypair_id = "${module.do-keypair.keypair_id}"

role = "edge"count = "1"size = "2gb"

}

Copy that file in its entirety to the root of the project to start customization. In the next sections, we'll explain how to obtain these settings.


API Key

To use DigitalOcean from Terraform, you'll need an API key. You can create one at the API page of the DigitalOcean UI.

To create a token, click "Generate New Token" in the "Personal Access Tokens" list. Name your token (mantl could be a good name) and make sure that write access is granted.

Once your token is created, you will need to copy it to the "token" field of the DigitalOcean provider. The token will only be displayed once, so do it before refreshing the page or navigating away.


If you don’t want to keep your access token in a file, you can instead write it to an environment variable:

DIGITALOCEAN_TOKEN
    The DigitalOcean token to use for Terraform-created resources

Regions and Sizes

As the sample notes, the region you deploy in must support metadata. All the newer regions support this (NYC3, for instance). If you don't use a metadata-supported region, Ansible will not know which roles to apply to your servers.

To find out if your desired region has metadata support:

curl -u "$DIGITALOCEAN_KEY:" https://api.digitalocean.com/v2/regions

This call will return a list of objects like the following. If "metadata" is in the "features" key, you can use the provided slug as region_name to select it.

{"name": "New York 3","slug": "nyc3","sizes": [

"32gb","16gb","2gb","1gb","4gb","8gb","512mb","64gb","48gb"

],"features": [

"private_networking","backups","ipv6","metadata"

],"available": true

}
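If you have jq installed, you can filter that response down to just the slugs of regions that advertise metadata support (a convenience sketch, not part of the Mantl tooling):

curl -s -u "$DIGITALOCEAN_KEY:" "https://api.digitalocean.com/v2/regions" \
  | jq -r '.regions[] | select(.features | index("metadata")) | .slug'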


Provisioning

Once you're all set up with the provider, customize your modules (for control_count and worker_count), run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

2.1.5 VMware vSphere

You can bring up a VMware vSphere environment using Terraform with the built-in vSphere provider.

Prerequisites

Terraform

Install Terraform according to the guide26.

Note: A minimum Terraform version of 0.6.16 is required to leverage some newer features of the vSphere provider.

VMware template

Create a VMware template27 for the microservices cluster. You will be able to change CPU and RAM parameters while provisioning a virtual machine from the template with Terraform. It's recommended to disable SELinux. Create a user and add public RSA keys for SSH into that user's $HOME/.ssh/authorized_keys. It is required to have VMware tools in the template, because we need to populate the resulting .tfstate file with the IP addresses of the provisioned machines. This configuration was tested on CentOS 7.1 x64.

Configuring vSphere for Terraform

Provider settings

vsphere_server, user and password are the required parameters needed by Terraform to interact with resources in your vSphere. The allow_unverified_ssl parameter is responsible for checking SSL certificates of the vCenter. If you have self-signed certificates, it is necessary to set this parameter to true.

VSPHERE_USER
    The vSphere username with the necessary permissions.

VSPHERE_PASSWORD
    The password of the user.
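For example, you could export the credentials instead of storing them in the .tf file (placeholder values; vsphere_server itself still goes in the provider block):

export VSPHERE_USER="administrator@vsphere.local"
export VSPHERE_PASSWORD="your-vcenter-password"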

Basic settings

datacenter is the name of the datacenter in your vSphere environment. It is required if your vSphere has several datacenters.

cluster is the name of the cluster in the selected datacenter. It’s an optional parameter.

26 https://www.terraform.io/intro/getting-started/install.html
27 https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-40BC4243-E4FA-4A46-8C8B-F50D92C186ED.html


pool is the name of a resource pool in vSphere. It is an optional parameter. It requires the full path to the pool, such as Cluster_Name/Resources/Pool_Name.

template is the name of the base VM or template you will deploy your machines from. It should include the path in the VM folder hierarchy: folder/subfolder/vm-name.

network_label is the label of the network assigned to the machines.

domain is the domain name to configure each host with.

dns_server1 & dns_server2 are the dns servers to configure on the hosts.

short_name is the prefix that will be used for the new virtual machines.

ssh_user is the username used for subsequent service provisioning. This user has to be in the sudoers group with the NOPASSWD option.

ssh_key is the path to the SSH private key.

control_volume_size is the size in GB to create the data drive for the control nodes

worker_volume_size is the size in GB to create the data drive for the worker nodes

edge_volume_size is the size in GB to create the data drive for the edge nodes

datastore is the name of the datastore to use for the new VMs.

Microservices settings

control_count and worker_count are the number of nodes for specific roles.

consul_dc is the name of the datacenter for the Consul configuration.

Optional settings

There are several optional settings that can be leveraged in the sample terraform file. Just uncomment the line and configure the desired value.

folder set this to the name of a folder to place the new virtual machines into under the Datacenter object. The folder must already exist.

control_cpu is the number of vCPUs to deploy for control nodes.

worker_cpu is the number of vCPUs to deploy for worker nodes.

edge_cpu is the number of vCPUs to deploy for edge nodes.

control_ram is the amount of vRAM in MBs to deploy for control nodes.

worker_ram is the amount of vRAM in MBs to deploy for worker nodes.

edge_ram is the number of vRAM in MBs to deploy for edge nodes.

linked_clone setting this to true will deploy the VMs as linked clones. The default of false will create standard full clones for each VM. Note that the performance of linked clones is dependent on the underlying servers and storage. In some cases linked_clone deployments have failed during installation where full clones work fine. If you get errors during the installation on a linked_clone setup, try using full clones and see if that fixes the error.


Advanced settings

You can also change advanced settings in the module file terraform/vsphere/main.tf.

configuration_parameters are custom parameters, for example a specific service role.

Provisioning

Once you're all set up with the provider, customize your module, run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. At the end of provisioning, Terraform will run commands to change the hostnames so that services work correctly. You can change this behavior in the provisioner section for each resource in the terraform/vsphere/main.tf file. Due to a timing condition when requesting a MAC address from the vSphere server (ethernet0.addressType = "vpx"), you may have to apply without the provisioner the first time and issue terraform apply (with the provisioner) afterwards. This allows the guest tools to provide the IP addresses.

Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

2.1.6 SoftLayer

New in version 0.3.3.

As of Mantl 0.3.3 you can bring up SoftLayer servers using Terraform. Mantl uses Terraform to provision hosts.

As of now, the released version of Terraform does not have SoftLayer support, but you can build a custom binary with SoftLayer provisioning (see https://github.com/hashicorp/terraform/pull/2554).

Configuring Terraform for SoftLayer

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/softlayer.sample.tf. It looks like this:

provider "softlayer" {}

module "softlayer-keypair" {source = "./terraform/softlayer/keypair"public_key_filename = "~/.ssh/id_rsa.pub"

}

module "softlayer-hosts" {source = "./terraform/softlayer/hosts"ssh_key = "${module.softlayer-keypair.keypair_id}"

hourly_billing = trueregion_name = "ams01"domain = "example.com"control_count = 3worker_count = 4edge_count = 2

}

Copy that file in its entirety to the root of the project to start customization. In the next sections, we'll describe the settings that you need to configure.


Username and API Key

You need to generate an API key for your SoftLayer account. This can be done in the control panel at http://softlayer.com.

This token, along with your username, must be put in your softlayer.tf file. Alternatively, if you don't want to put credentials in the Terraform file, you can set environment variables:

SOFTLAYER_USERNAME
    The SoftLayer username.

SOFTLAYER_API_KEY
    The SoftLayer API key.
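For example (placeholder values):

export SOFTLAYER_USERNAME="your-softlayer-username"
export SOFTLAYER_API_KEY="your-softlayer-api-key"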

Provisioning

Once you're all set up with the provider, customize your modules (for control_count and worker_count), run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

2.1.7 CenturyLinkCloud

New in version 1.0.3.

Terraform can use CLC to provision hosts for your cluster. You can download Terraform from terraform.io28.

Documentation on using the CLC driver with terraform is available here29.

NOTE: The CLC driver may not yet be available in the main Terraform distribution. If it is absent, see https://github.com/CenturyLinkCloud/terraform-provider-clc.

Configuring Terraform

From the project root, copy the template located at terraform/clc.sample.tf to ./clc.tf

In order to provision to CLC via Terraform, login credentials are required. Trial accounts with free credits are available; sign up here30.

Account Setup

Be sure you have minimally set up your CLC account with the following:

• A default network31

• Recommended: a dedicated user/pass for use in provisioning with terraform

28 https://www.terraform.io/downloads.html
29 https://www.terraform.io/docs/providers/clc/index.html
30 https://www.ctl.io
31 https://control.ctl.io/Network/network/Create


Provider Settings

The driver accepts credentials either via environment variables or inlined as provider config.

By environment variables:

CLC_USERNAME

CLC_PASSWORD

CLC_ACCOUNT
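For example (placeholder values):

export CLC_USERNAME="your-clc-username"
export CLC_PASSWORD="your-clc-password"
export CLC_ACCOUNT="your-clc-account-alias"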

Or, conversely, via provider config:

...
variable ssh_key { default = "~/.ssh/id_rsa.pub" }

provider "clc" {
    username = "<clc username goes here>"
    password = "<clc password goes here>"
    account = "<clc account goes here>"
}

Basic Settings

location is the datacenter your cluster will be deployed to. The clc_group.mantl server group will hold all the generated nodes.

{control|worker|edge}_count controls the number of nodes deployed to each role.

ssh_pass is the initial server password for root. It is advised to test whatever password is provided here before using it with Terraform.

ssh_key is a public key that will be installed into root’s authorized_keys.

Additional settings are available for customization in ./terraform/clc/node.tf.

Provisioning

Once you've reviewed and/or modified the settings, terraform get will prepare your cluster, terraform plan can be reviewed to check the deployment, and terraform apply will provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

2.1.8 Bare-Metal

With respect to Mantl, a bare-metal environment means a set of physical computers that have CentOS 7 installed on them.

If you are using OpenStack, VMware, or some other cloud provider, Mantl.io has Terraform scripts for you. From a Mantl.io perspective, this doc is about setting up the inventory file by hand and preparing the machines to a state similar to what Terraform would have done.

The minimum requirements for installing Mantl (based on the AWS sample) are edge, control, and worker nodes with 1 core and about 4 GB of RAM. This documentation is really a description of how to set up a static inventory file when you don't create your inventory with terraform.py. There is nothing about this document that requires that they be physical systems.

This document explains:

36 Chapter 2. General Information about Mantl with Ansible

Page 41: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

• Preparing your machines with Centos

• Network and storage concerns

• Creating your inventory

• Setting up Ansible

2.1.9 Setting Up Centos 7

Thumb Drive Install

There are more professional ways of creating your instances, but if you are looking for a solution for a couple of machines at home, you will need some tips on how to do it. The least technical way to do this is with a thumb drive.

Create a bootable USB drive with CentOS 7. This can be a bit confusing; we recommend the following two tutorials for OS X:

• http://www.myiphoneadventure.com/os-x/create-a-bootable-centos-usb-drive-with-a-mac-os-x

• http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx

Mantl.io requires the latest build of Centos 7.

During installation you will use the defaults except:

• Manually configure your partitions. On the Manual partitioning page:

• Remove existing partitions

• Press the button to automatically partition. This will give you a default set to start with.

• The automatic partitioning will put 50 GB toward root and the rest in home; change this:

– These are services machines and won't store many files in home. Home should be set to a small partition size, leaving you with some unpartitioned space.

– You will need to leave unformatted space on the drive for Docker. Try to leave at least 50 GB unformatted for the Docker LVM partition that is described in the "Create Partition For Docker LVM" section below.

• Turn on your wired internet connection. It should just be a toggle switch for your device

• Once the install starts, it asks for a root password and a first user

• Having a centos admin user will match what happens in the cloud environments

Once rebooted, if you forgot to turn on your internet in the install, you can set it up using the following tutorial: http://www.krizna.com/centos/setup-network-centos-7/ . It might be easier and more automated (and therefore less error prone) to just reinstall and remember to turn on your internet during the install.

Set up Base Network

Choosing a static IP range

I chose 172.16.222.x because it's unlikely to overlap with any network I might move this cluster to.


Give it a static IP and set DNS and Gateway

http://ask.xmodulo.com/configure-static-ip-address-centos7.html

At the command line enter:

ip addr

You should see something like:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether b8:ae:ed:71:6c:06 brd ff:ff:ff:ff:ff:ff
    inet 172.16.222.22/24 brd 172.16.222.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::baae:edff:fe71:6c06/64 scope link
       valid_lft forever preferred_lft forever

From this you can see that eno1 is the ethernet device.

Edit /etc/sysconfig/network-scripts/ifcfg-eno1

You can leave everything that is in there, but you need to change or add the following. BOOTPROTO and ONBOOT are probably already there.

BOOTPROTO="static"
IPADDR="172.16.222.6"
GATEWAY="172.16.222.1"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
DNS2="208.67.222.222"
NM_CONTROLLED=no
ONBOOT="yes"

The DNS lines are going to have to change once consul is up.

NOTE: in Centos 7 /etc/resolv.conf is a generated file.

You could also put the dns lines in /etc/sysconfig/network.

Permanently change your hostname with:

hostnamectl set-hostname edge22

After saving, restart the network:

systemctl restart network
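A quick sanity check after the restart might look like this (the interface name eno1 is from the example above; adjust to your hardware):

ip addr show eno1     # confirm the static address is assigned
ping -c 3 8.8.8.8     # confirm the gateway and outbound connectivity work
hostname              # confirm the new hostname stuck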

Create Partition For Docker LVM

• su

• parted /dev/sda print

• fdisk /dev/sda

38 Chapter 2. General Information about Mantl with Ansible

Page 43: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

• Command: n

• partition: default

• please note which partition number it is. So if it's partition 5, eventually you will need to tell Mantl /dev/sda5 for the LVM

• you want all your machines to use the same partition number, because this partition is entered as a system-wide variable

• first sector: default

• last sector: +50G

• Command: w

• reboot

Don't put a file system on the partition.

Note: I am creating a partition size of 50 GB; this is for Docker. Just make it consistent across your cluster.

There are two main types of drives on the market today. The older type of drive is said to have MS-DOS partitions. When partitioning these types of drives you will be asked if you want to create a primary partition or an extended partition. You will need to make it a primary partition.

Additionally, if you have an MS-DOS partitioned drive, you may have to run the following patch: https://github.com/ansible/ansible-modules-extras/issues/1504 against the file /Library/lvg.py. If, during the Ansible run (as described in the section "Run It!" below), the run hangs on the task lvm | create volume group, then you will need to follow the instructions in issue 1504.

Here is an example inventory file. It should be placed in the root of the mantl directory.

[role=control]
control-01 private_ipv4=172.16.222.6 ansible_ssh_host=172.16.222.6
control-02 private_ipv4=172.16.222.7 ansible_ssh_host=172.16.222.7
control-03 private_ipv4=172.16.222.8 ansible_ssh_host=172.16.222.8

[role=control:vars]
consul_is_server=true
lvm_physical_device=/dev/sda3

[role=worker]
worker-001 private_ipv4=172.16.222.11 ansible_ssh_host=172.16.222.11
worker-002 private_ipv4=172.16.222.12 ansible_ssh_host=172.16.222.12
worker-003 private_ipv4=172.16.222.13 ansible_ssh_host=172.16.222.13

[role=worker:vars]
consul_is_server=false
lvm_physical_device=/dev/sda3

[role=edge]
edge-01 private_ipv4=172.16.222.16 ansible_ssh_host=172.16.222.16
edge-02 private_ipv4=172.16.222.17 ansible_ssh_host=172.16.222.17

[role=edge:vars]
consul_is_server=false
lvm_physical_device=/dev/sda3

[dc=dc1]
control-01
control-02
control-03
worker-001
worker-002
worker-003
edge-01
edge-02

I had to add the ansible_ssh_host entries to run playbooks/reboot-hosts.yml, and the private_ipv4 is needed by several roles.

The dc=dc1 group is needed to set consul_dc_group in the consul role. It is used in the dnsmasq role. dc1 is the default. If you change the name of the data center in your inventory file, you will need to set the consul_dc variable. For example, if you called your dc 'mydc' then you would need to enter:

ansible-playbook -u centos -i inventory -e consul_dc=mydc \
    -e provider=bare-metal -e @security.yml sample.yml >& bare-metal.log

The rest of the options will be discussed below.

2.1.10 Getting Started with Ansible

Add your key to all the machines in your inventory

ansible all -i inventory -u centos -k -m authorized_key -a "user=centos key=https://github.com/youraccount.keys"

Note this makes use of your public key on GitHub. If you don't have a GitHub account or a key pair on your GitHub account, please look at the documentation for the Ansible authorized_key module for other options.

The -k is needed because the SSH connection still uses password-based authentication.

After this authorization step has been completed, all commands can happen without the password and the -k option. Test with:

ansible all -i inventory -u centos -m ping

You should get back a pong from each machine in your inventory.

Copy the /etc/hosts file over

Add your nodes to /etc/hosts:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.222.5 MyMac
172.16.222.6 control-01
172.16.222.7 control-02
172.16.222.8 control-03
172.16.222.11 worker-01
172.16.222.12 worker-02
172.16.222.13 worker-03
172.16.222.16 edge-01
172.16.222.17 edge-02

Copy the /etc/hosts file over to all your nodes:

ansible all -i inventory -u centos --sudo --ask-sudo-pass -m copy -a "src=hosts dest=/etc/hosts"

You now are ready to run the playbook. Change directory to the project root. Your inventory should be there.

Run the security-setup script:

40 Chapter 2. General Information about Mantl with Ansible

Page 45: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

./security-setup

It asks for one admin password. At the end of that run there will be a security.yml file. It will have the password you entered and a lot of keys needed for installation.

The playbook you will be running is sample.yml. Since you created your own inventory and didn't use Terraform, there are a few variables you need to set for your run.

ansible-playbook -u centos -i inventory \
    -e provider=bare-metal \
    -e consul_dc=dc1 \
    -e docker_lvm_backed=true \
    -e docker_lvm_data_volume_size="80%FREE" \
    -e @security.yml sample.yml >& bare-metal.log

In another window, tail -f that log file to follow what's going on.

The parts of this command mean the following:

ansible-playbook -u centos -i inventory run the Ansible playbook as the centos user against the inventory found in the ./inventory file.

-e provider=bare-metal The "provider" is bare-metal, where a user sets up the infrastructure and then creates an inventory file as described above. If the inventory had been generated by terraform.py against a Terraform state file for infrastructure built on Google Cloud, this value would have been set automatically to 'gcs'.

-e consul_dc=dc1 This is the name found in your ./inventory file for your datacenter.

-e docker_lvm_backed=true LVM-backed Docker is a really good idea on CentOS. This is why you created the extra partition during installation.

-e docker_lvm_data_volume_size="80%FREE" This defaults to "40%FREE" in the docker role because the default LVM partition is shared with other things. You could leave this off, but it's likely that with your own hardware you will have different constraints, and it's a good variable to know.

-e @security.yml This is a series of variables that hold all the security settings of the various parts of Mantl. The @ causes Ansible to evaluate the file.

sample.yml This is the ansible file that is being run.

>& bare-metal.log This redirects the output to a file so that you can review it later. Tailing with the -f flag lets you watch the progress as Ansible works through the roles across your inventory.

Once you are done, go to the browser and enter the IP address of any control node, and you should see the Mantl UI. For the inventory shown above, you could go to 172.16.222.6/ui.

2.1.11 Vagrant

New in version 1.0.

Vagrant32 is used to "Create and configure lightweight, reproducible, and portable development environments." We use it to test Mantl locally before deploying to a cloud provider. Our current setup creates a configurable number of virtual machines, and you can define how many you want to build using a configuration file as described below. One of the control servers provisions the others using the sample.yml playbook.

32 https://www.vagrantup.com/


Getting Started

Simply run vagrant up. If you'd like to customize your build further, you can create a vagrant-config.yml file in the project's root directory with variables as defined in the "Variables" section below (a minimal example appears after the variable list).

Variables

You can find the default values for all these variables in the config_hash in the provided Vagrantfile.

worker_count, control_count, edge_count, kubeworker_count
    The number of nodes with this role.

worker_ip_start, control_ip_start, edge_ip_start, kubeworker_ip_start
    A base IP address to which the node number is appended. For example, if worker_ip_start is set to "192.168.100.10", the first worker node will have the IP address 192.168.100.101, the second will have 192.168.100.102, etc.

worker_memory, control_memory, edge_memory, kubeworker_memory
    The amount of memory in MB to allocate for each kind of VM. This setting is only valid for the virtualbox provider.

worker_cpus, control_cpus, edge_cpus, kubeworker_cpus
    The number of CPUs to allocate for each kind of VM. This setting is only valid for the virtualbox provider.

network
    Default: private. Which type of Vagrant network to provision. See https://www.vagrantup.com/docs/networking/index.html

playbooks
    An array of paths to Ansible playbooks to run during the provisioning step. For example, to attempt to run the GlusterFS addon (./addons/glusterfs.yml), you would add a /vagrant/addons/glusterfs.yml entry. You can also use this directive to run playbooks other than sample.yml after provisioning for the first time, by modifying this variable and running vagrant provision.
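As a minimal sketch (the counts are only examples; the variable names come from the list above), a vagrant-config.yml could be created and used like this:

cat > vagrant-config.yml <<'EOF'
control_count: 1
worker_count: 2
edge_count: 1
EOF
vagrant up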

Limitations

Mantl will likely experience stability issues with one control node. As stated in the Consul docs33, this setup is inherently unstable.

Moreover, GlusterFS and LVM are not supported on Vagrant, and Traefik (edge nodes) is turned off by default. GlusterFS support might happen in the future, but it is an optional feature and not a priority.

2.1.12 Triton

New in version 1.1.

As of Mantl 1.1 you can bring a Mantl cluster up on Joyent's Triton. Please be sure to use at least Terraform 0.6.15, the first version with the required resources.

33 https://www.consul.io/docs/guides/bootstrapping.html


Configuring Triton for Terraform

Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/triton.sample.tf. The beginning of it looks like this:

# this sample assumes that you have `SDC_ACCOUNT`, `SDC_KEY_MATERIAL`,
# `SDC_KEY_ID`, and `SDC_URL` in your environment from (for example) using the
# Triton command-line utilities. If you don't, set `account`, `key_material`,
# `key_id`, and `url` in the provider below
provider "triton" {}

variable key_path { default = "~/.ssh/id_rsa.pub" }

Copy terraform/triton.sample.tf in its entirety to the root of the project as triton.tf to start customization. In the next section, we'll explain how to obtain the settings mentioned above.

Basic Settings

First, we'll need an account. This is the username you use to log into Triton. You can create an account at joyent.com34; refer to their getting started documentation35 for more information. Use the SDC_ACCOUNT environment variable, or set account in the provider (see the sample below).

Note: new Joyent accounts may be subject to provisioning limits. Contact Joyent support36 to have those limits raised.

We'll also need your key material and ID. Key material is the material of the public key used to authenticate requests. You can set SDC_KEY_MATERIAL with this info, or use Terraform's file interpolation, shown below. The key ID is displayed on your Triton account page, but you can obtain it by running ssh-keygen -l -E md5 -f /path/to/your/key/id_rsa.pub. Set this as key_id or SDC_KEY_ID in the environment.

Last, you'll need to specify the datacenter you want to operate in (key: url). The default is us-east-1, and the general format is https://{datacenter-slug}.api.joyentcloud.com. If unset, this will be pulled from SDC_URL. You can select from any of Joyent's public data centers37, or enter a custom URL for a private data center38.

Finally, here’s an example with the variables set:

provider "triton" {account = "AccountName"key_material = "${file("~/.ssh/id_rsa.pub")}"key_id = "25:d4:a9:fe:ef:e6:c0:bf:b4:4b:4b:d4:a8:8f:01:0f"

# specify the datacenter by giving the API URLurl = "https://us-east-1.api.joyentcloud.com"

}

Provisioning

Once your provider is set up, customize your modules (mainly the variables ending in _count, which control scaling). Run terraform get to prepare the modules, terraform plan to see what will be created, and terraform apply to provision the cluster.

34 https://www.joyent.com
35 https://docs.joyent.com/public-cloud/getting-started
36 https://docs.joyent.com/public-cloud/getting-started/limits
37 https://docs.joyent.com/public-cloud/data-centers
38 https://github.com/joyent/sdc


Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.

Configuring DNS

You can set up your DNS records with Terraform:

DNS

New in version 0.3.

Terraform lets you configure DNS for your instances. The DNS provider is loosely coupled from the server provider, so you could, for example, use the DNSimple provider for either OpenStack or AWS hosts, or use the Google Cloud DNS provider for DigitalOcean hosts.

Providers

These are the supported DNS providers:

CloudFlare

New in version 0.5.

Terraform can use CloudFlare to provide DNS records for your cluster, independent of which provider you use to provision your servers.

CloudFlare Username/Token The easiest way to configure credentials for CloudFlare is by setting environment variables:

CLOUDFLARE_EMAIL
    Your e-mail address for the CloudFlare account.

CLOUDFLARE_TOKEN
    The CloudFlare token (found in the CloudFlare admin panel).

Alternatively, you can set up the CloudFlare provider credentials in your .tf file:

provider "cloudflare" {email = "the e-mail address for your CloudFlare account"token = "your CloudFlare token"

}

DNSimple

New in version 0.3.

Terraform can use DNSimple to provide DNS records for your cluster, independent of which provider you use to provision your servers.

DNSimple Username/Token The easiest way to configure credentials for DNSimple is by setting environment variables:

DNSIMPLE_EMAIL
    Your e-mail address for the DNSimple account.

DNSIMPLE_TOKEN
    The DNSimple token (found in the DNSimple admin panel).

Alternatively, you can set up the DNSimple provider credentials in your .tf file:


provider "dnsimple" {token = "your dnsimple token"email = "your e-mail address for the dnsimple account"

}

GCP Cloud DNS Terraform can use google_dns_record_set resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the managed_zone parameter. You can find your Managed Zone name in the GCP Networking Console.

If you haven't set up a managed zone for the domain you're using, you can do that with Terraform as well; just add this extra snippet to your .tf file:

resource "google_dns_managed_zone" "managed-zone" {name = "my-managed-zone"dns_name = "example.com."description "Managed zone for example.com."

}

In your gce.tf, you will want to enable the cloud-dns module:

module "cloud-dns" {source = "./terraform/gce/dns"

2.1. Preparing to provision Cloud Hosts 45

Page 50: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

control_count = "${var.control_count}"control_ips = "${module.control-nodes.control_ips}"domain = "mydomain.com"edge_count = "${var.edge_count}"edge_ips = "${module.edge-nodes.edge_ips}"lb_ip = "${module.network-lb.public_ip}"managed_zone = "my-cloud-dns-zone"short_name = "${var.short_name}"subdomain = "service"worker_count = "${var.worker_count}"worker_ips = "${module.worker-nodes.worker_ips}"

}

Route53 Terraform can use aws_route53_record resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the hosted_zone_id parameter. You can find your own hosted zone ID in your AWS Route 53 console.

Route53 uses your normal Amazon Web Services provider credentials.

# Example setup for an AWS Route53
module "route53" {
    source = "./terraform/aws/route53/dns"
    control_count = "${var.control_count}"
    control_ips = "${module.control-nodes.ec2_ips}"
    domain = "my-domain.com"
    edge_count = "${var.edge_count}"
    edge_ips = "${module.edge-nodes.ec2_ips}"
    hosted_zone_id = "XXXXXXXXXXXX"
    short_name = "${var.short_name}"
    subdomain = ".dev"
    worker_count = "${var.worker_count}"
    worker_ips = "${module.worker-nodes.ec2_ips}"
    kubeworker_count = "${var.kubeworker_count}"
    kubeworker_ips = "${module.kubeworker-nodes.ec2_ips}"
}

DNS Records and Configuration

The providers create a uniform set of DNS A records:

• {short-name}-control-{nn}.node{subdomain}.{domain}

• {short-name}-edge-{nn}.node{subdomain}.{domain}

• {short-name}-worker-{nnn}.node{subdomain}.{domain}

• {control}{subdomain}.{domain}


• *.{subdomain}.{domain}

For example, with short-name=mantl, domain=example.com, a blank subdomain, 3 control nodes, 4 worker nodes, 2 Kubernetes worker nodes, and 2 edge nodes, that will give us these DNS records:

• mantl-control-01.node.example.com

• mantl-control-02.node.example.com

• mantl-control-03.node.example.com

• mantl-worker-001.node.example.com

• mantl-worker-002.node.example.com

• mantl-worker-003.node.example.com

• mantl-worker-004.node.example.com

• mantl-kubeworker-001.node.example.com

• mantl-kubeworker-002.node.example.com

• mantl-edge-01.node.example.com

• mantl-edge-02.node.example.com

• control.example.com (pointing to control 1)

• control.example.com (pointing to control 2)

• control.example.com (pointing to control 3)

• *.example.com (pointing to edge node load balancer)

If you don't want the DNS records hanging off the apex, you can specify the subdomain parameter to the DNS providers, which will be inserted in the records just before the apex. For example, if subdomain=.mantl in the previous config, the wildcard records would be *.mantl.example.com.

Warning: Due to a limitation in Terraform's string support, the subdomain must begin with a period (for example, .mantl).

The node records are intended to be used to access each node individually for maintenance. You can access the frontend web components of the Mantl cluster through control.example.com, which will direct you to the rest of the stack.

You can use the wildcard records for load-balanced access to any app in Marathon. For example, if you have an app named test running in Marathon, you can access it at test.example.com. Please see the Traefik configuration for more details.
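For a quick check (hypothetical domain, assuming the wildcard record points at the edge load balancer and the test app is running):

curl -v http://test.example.com/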

Configuration A good way to configure DNS is to move the values common to your cloud config and DNS config into separate variables. You can do that like this:

variable control_count { default = 3 }
variable worker_count { default = 2 }
variable kubeworker_count { default = 2 }
variable edge_count { default = 2 }
variable short_name { default = "mantl" }

Then use those variables in the module like this:


module "dns" {source = "./terraform/cloudflare"

control_count = "${var.control_count}"control_ips = "${module.do-hosts.control_ips}"domain = "mantl.io"edge_count = "${var.edge_count}"edge_ips = "${module.do-hosts.edge_ips}"short_name = "${var.short_name}"subdomain = ".do.test"worker_count = "${var.worker_count}"worker_ips = "${module.do-hosts.worker_ips}"kubeworker_count = "${var.kubeworker_count}"kubeworker_ips = "${module.do-hosts.kubeworker_ips}"

}

Configuration Variables Configuration is done with a set of consistent variables across the providers:

control_count, worker_count, kubeworker_count, and edge_count
    The count of nodes for each role.

control_ips, worker_ips, kubeworker_ips, and edge_ips
    A comma-separated list of IPs. The cloud provider modules all export these as control_ips, worker_ips, kubeworker_ips, and edge_ips as well, so you can plug them in like so:

control_ips = "${module.your-hosts.control_ips}"

domain
    The top-level domain to add the records to.

Example: mantl.io

short_name
    The same short name passed into the cloud provider, used to generate consistent names.

subdomain
    A path to put between the top-level domain and the generated records. Must begin with a period.

Example: .apps

control_subdomain
    The name for the control group (to generate control.yourdomain.com). By default, this is control, but you can change it to whatever you'd like.

Community-supported platforms: vSphere, SoftLayer, and bare-metal (covered in the sections above).

There are several preparatory steps to provisioning the cloud hosts that are common to all providers:


2.1.13 Step 1: Copy .tf file

You will need to copy the .tf file of the platform you are using from mantl/terraform/39 to the root of the project. For example, mantl/terraform/openstack-modules.sample.tf will need to be copied to mantl/openstack-module-sample.tf. The variables in the copied .tf file will need to be changed to your configuration.

Note: More than one .tf file in the mantl directory will lead to errors upon deployment. Since Mantl initially does not support multiple datacenters, extra .tf files will need to be renamed or moved. If you would like to add multiple datacenters, see the Consul docs40 for more information.

2.1.14 Step 2: Run security-setup

Running the security-setup script in the root directory will set up passwords, authentication, and certificates. For more information, see the security-setup script documentation.

2.1.15 Step 3: Set up DNS records

You can set up your DNS records with Terraform. See DNS.

2.2 Deploying software via Ansible

Note: Ansible requires a Python 2 binary. If yours is not at /usr/bin/python, please view the Ansible FAQ41. You can add an extra variable to the following commands, e.g. ansible -e ansible_python_interpreter=/path/to/python2.

The following steps assume that you have provisioned your cloud hosts by taking the steps listed in one of the guides above.

This project ships with a dynamic inventory file that reads Terraform .tfstate files, terraform.py42. If you are not running Ansible from the root directory or would like to use a custom inventory file, you can use the -i argument of ansible or ansible-playbook to specify the inventory file path.

ansible-playbook -i path/to/inventory -e @security.yml mantl.yml

2.2.1 Step 1: Add password to the ssh-agent

For the next steps, you may want to add your password to the ssh-agent43 to avoid re-entering your password.
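For example, assuming your private key is at ~/.ssh/id_rsa:

eval "$(ssh-agent -s)"    # start an agent if one is not already running
ssh-add ~/.ssh/id_rsa     # you will be prompted for the passphrase once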

2.2.2 Step 2: Ping the servers to ensure they are reachable via ssh

ansible all -m ping

39 https://github.com/CiscoCloud/mantl/tree/master/terraform
40 https://www.consul.io/docs/guides/datacenters.html
41 http://docs.ansible.com/faq.html
42 https://github.com/CiscoCloud/mantl/blob/master/plugins/inventory/terraform.py
43 https://wiki.archlinux.org/index.php?title=SSH_keys&redirect=no#SSH_agents


It may take a few minutes after running Terraform for the servers to be reachable. If any servers fail to connect, you can check your connection by adding -vvvv for verbose SSH debugging to view the errors in more detail.

2.2.3 Step 3: Upgrade packages

Warning: Due to updated packages in the recent CentOS 7 (1511) release, it is critical that you upgrade operating system packages on all servers before proceeding with the deployment:

ansible-playbook playbooks/upgrade-packages.yml

If you neglect to upgrade packages, you will likely experience multiple failures, particularly around Consul. See issues 907 and 927 for more details.

2.2.4 Step 4: Deploy the software

First, you will need to customize a playbook. A sample can be found at sample.yml in the root directory, which you can copy to mantl.yml. You can find more about customizing this at playbooks46. You'll want to change consul_acl_datacenter to your preferred ACL datacenter. If you only have one datacenter, you can remove this variable.
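For example, from the project root:

cp sample.yml mantl.yml   # then edit mantl.yml (e.g. consul_acl_datacenter) before deploying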

Next, assuming you’ve placed the filled-out template at mantl.yml:

ansible-playbook -e @security.yml mantl.yml

The deployment will probably take a while as all tasks are completed.

2.3 Checking your deployment

Once your deployment has completed, you will be able to access the Mantl UI in your browser by connecting to one of the control nodes.

If you need the IP address of your nodes, you can use terraform.py:

$ python2 plugins/inventory/terraform.py --hostfile
## begin hosts generated by terraform.py ##
xxx.xxx.xxx.xxx mantl-control-01
xxx.xxx.xxx.xxx mantl-control-02
xxx.xxx.xxx.xxx mantl-control-03
xxx.xxx.xxx.xxx mantl-edge-01
xxx.xxx.xxx.xxx mantl-edge-02
xxx.xxx.xxx.xxx mantl-kubeworker-001
xxx.xxx.xxx.xxx mantl-kubeworker-002
xxx.xxx.xxx.xxx mantl-kubeworker-003
xxx.xxx.xxx.xxx mantl-worker-001
xxx.xxx.xxx.xxx mantl-worker-002
xxx.xxx.xxx.xxx mantl-worker-003
xxx.xxx.xxx.xxx mantl-worker-004
xxx.xxx.xxx.xxx mantl-worker-005
## end hosts generated by terraform.py ##

44 https://github.com/CiscoCloud/mantl/issues/907
45 https://github.com/CiscoCloud/mantl/issues/927
46 http://docs.ansible.com/ansible/playbooks.html


When you enter a control node's IP address into your browser, you'll likely get prompted about invalid security certificates (if you have SSL/TLS turned on). Follow your browser's instructions on how to access a site without valid certs. Then, you will be presented with a basic access authentication prompt. The username and password for this are the ones generated by security-setup, and are stored in security.yml if you forgot them.

Here is what you should be looking at after you connect and authenticate: the Mantl UI47 (screenshot omitted). See the GitHub project48 for more.

2.3.1 Customizing your deployment

Below are guides for customizing your deployment:

Adding SSH Users

If you want to add more users to the servers, create a file (e.g. users.yml). Below is an example. Each public SSH key should be on a single line. The users.yml file will need to be passed to ansible-playbook with -e @users.yml.

Warning: All users added in this file will have root access via sudo.

---
users:
  - name: user1
    enabled: 1
    pubkeys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA.....
  - name: user2
    enabled: 1
    pubkeys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAABA......
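Putting it together, the extra users file is passed alongside the security settings; for example, assuming your customized playbook is mantl.yml:

ansible-playbook -e @security.yml -e @users.yml mantl.yml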

Custom Playbook

Your customized mantl.yml playbook49 should be used to deploy Mantl to your servers.

Below is an annotated playbook explaining the values:

---
# CHECK SECURITY - when customizing you should leave this in. If you
# take it out and forget to specify security.yml, security could be turned off
# on components in your cluster!
- include: "{{ playbook_dir }}/playbooks/check-requirements.yml"

# BASICS - we need every node in the cluster to have common software running to
# increase stability and enable service discovery. You can look at the
# documentation for each of these components in their README file in the
# `roles/` directory, or by checking the online documentation at
# docs.mantl.io.
- hosts: all
  vars:

47 https://github.com/CiscoCloud/nginx-mantlui
48 https://github.com/CiscoCloud/nginx-mantlui
49 http://docs.ansible.com/playbooks.html


    # consul_acl_datacenter should be set to the datacenter you want to control
    # Consul ACLs. If you only have one datacenter, set it to that or remove
    # this variable.
    # consul_acl_datacenter: your_primary_datacenter
    consul_servers_group: role=control
  roles:
    - common
    - certificates
    - lvm
    - docker
    - logrotate
    - consul-template
    - nginx
    - consul
    - etcd
    - dnsmasq

# ROLES - after provisioning the software that's common to all hosts, we do
# specialized hosts. There are 4 built-in roles: control, workers (mesos),
# kubeworkers (kubernetes), and edge (for external traffic into the cluster).

# The control nodes are necessarily more complex than the worker nodes, and have
# ZooKeeper, Mesos, and Marathon leaders as well as Kubernetes master
# components. In addition, they control Vault to manage secrets in the cluster.
# These servers do not run applications by themselves, they only schedule work.
# That said, there should be at least 3 of them (and always an odd number) so
# that ZooKeeper can get and keep a quorum.
- hosts: role=control
  gather_facts: yes
  vars:
    consul_servers_group: role=control
    mesos_leaders_group: role=control
    mesos_mode: leader
    zookeeper_server_group: role=control
  roles:
    - vault
    - zookeeper
    - mesos
    - calico
    - marathon
    - mantlui
    - kubernetes
    - kubernetes-master

# The worker role itself has a minimal configuration, as it's designed mainly to
# run software that the Mesos leader schedules. It also forwards traffic to
# globally known ports configured through Marathon.
- hosts: role=worker
  # role=worker hosts are a subset of "all". Since we already gathered facts on
  # all servers, we can skip it here to speed up the deployment.
  gather_facts: no
  vars:
    mesos_mode: follower
    zookeeper_server_group: role=control
  roles:
    - mesos
    - calico


# The kubeworker role is similar to the worker role but is intended for
# Kubernetes workloads.
- hosts: role=kubeworker
  gather_facts: yes
  roles:
    - calico
    - kubernetes
    - kubernetes-node

# The edge role exists solely for routing traffic into the cluster. Firewall
# settings should be such that web traffic (ports 80 and 443) is exposed to the
# world.
- hosts: role=edge
  gather_facts: yes
  vars:
    # this is the domain that traefik will match on to do host-based HTTP
    # routing. Set it to a domain you control and add a star domain to route
    # traffic. (EG *.marathon.localhost)
    #
    # For those migrating from haproxy, this variable serves the same purpose
    # and format as `haproxy_domain`.
    traefik_marathon_domain: marathon.localhost
  roles:
    - traefik

Run this playbook with ansible-playbook -i plugins/inventory/terraform.py -e @security.yml /path/to/your/playbook.yml. It will take a while for everything to come up, as fresh machines will have to download quite a few dependencies.

Using the Dockerfile

New in version 0.3.1.

Note: Please review the getting started guide for more detailed information about setting up a cluster.

Setup

1. Before you begin, it is recommended that you run the security-setup script to configure authentication and authorization for the various components.

2. Next, you will need to set up a Terraform template (*.tf file) in the root directory for the cloud provider of your choice. See the following links for more information:

OpenStack Mantl uses Terraform to provision hosts in OpenStack. You can download Terraform from terraform.io50.

This project provides a number of playbooks designed for doing host maintenance tasks on OpenStack hosts. You can find them in playbooks/ in the main project directory.

Configuring OpenStack authentication Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/openstack-modules.sample.tf. It looks like this:

50 https://www.terraform.io/downloads.html


variable subnet_cidr { default = "10.0.0.0/24" }
variable public_key { default = "/home/you/.ssh/id_rsa.pub" }
variable ssh_user { default = "cloud-user" }

variable name { default = "mantl" }          # resources will start with "mantl-"
variable control_count { default = "3" }     # mesos masters, zk leaders, consul servers
variable worker_count { default = "5" }      # worker nodes
variable kubeworker_count { default = "2" }  # kubeworker nodes
variable edge_count { default = "2" }        # load balancer nodes

# Run 'nova network-list' to get these names and values
# Floating ips are optional
variable external_network_uuid { default = "uuid-of-your-external-network" }
variable floating_ip_pool { default = "name-of-your-floating-ip-pool" }

# Run 'nova image-list' to get your image name
variable image_name { default = "your-CentOS-7" }

# DNS servers passed to Openstack subnet
variable dns_nameservers { default = "" } # comma separated list of ips, e.g. "8.8.8.8,8.8.4.4"

# Openstack flavors control the size of the instance, i.e. m1.xlarge.
# Run 'nova flavor-list' to list the flavors in your environment
# Below are typical settings for mantl
variable control_flavor_name { default = "your-XLarge" }
variable worker_flavor_name { default = "your-Large" }
variable kubeworker_flavor_name { default = "your-Large" }
variable edge_flavor_name { default = "your-Small" }

# Size of the volumes
variable control_volume_size { default = "50" }
variable worker_volume_size { default = "100" }
variable edge_volume_size { default = "20" }

module "ssh-key" {
    source = "./terraform/openstack/keypair_v2"
    public_key = "${var.public_key}"
    keypair_name = "mantl-key"
}

# Create a network with an externally attached router
module "network" {
    source = "./terraform/openstack/network"
    external_net_uuid = "${var.external_network_uuid}"
    subnet_cidr = "${var.subnet_cidr}"
    name = "${var.name}"
    dns_nameservers = "${var.dns_nameservers}"
}

# Create floating IPs for each of the roles
# These are not required if your network is exposed to the internet
# or you don't want floating ips for the instances.
module "floating-ips-control" {
    source = "./terraform/openstack/floating-ip"
    count = "${var.control_count}"
    floating_pool = "${var.floating_ip_pool}"
}


module "floating-ips-worker" {source = "./terraform/openstack/floating-ip"count = "${var.worker_count}"floating_pool = "${var.floating_ip_pool}"

}

module "floating-ips-kubeworker" {source = "./terraform/openstack/floating-ip"count = "${var.kubeworker_count}"floating_pool = "${var.floating_ip_pool}"

}

module "floating-ips-edge" {source = "./terraform/openstack/floating-ip"count = "${var.edge_count}"floating_pool = "${var.floating_ip_pool}"

}

# Create instances for each of the rolesmodule "instances-control" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.control_count}"role = "control"volume_size = "${var.control_volume_size}"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-control.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.control_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-worker" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.worker_count}"volume_size = "${var.worker_volume_size}"count_format = "%03d"role = "worker"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-worker.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.worker_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-kubeworker" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.kubeworker_count}"volume_size = "100"count_format = "%03d"role = "kubeworker"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-kubeworker.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"

2.3. Checking your deployment 55

Page 60: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

flavor_name = "${var.kubeworker_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

module "instances-edge" {source = "./terraform/openstack/instance"name = "${var.name}"count = "${var.edge_count}"volume_size = "${var.edge_volume_size}"count_format = "%02d"role = "edge"network_uuid = "${module.network.network_uuid}"floating_ips = "${module.floating-ips-edge.ip_list}"keypair_name = "${module.ssh-key.keypair_name}"flavor_name = "${var.edge_flavor_name}"image_name = "${var.image_name}"ssh_user = "${var.ssh_user}"

}

Copy that file in its entirety to the root of the project to start customization. NOTE: all configuration entries need to be completed. In the next sections, we'll explain how to obtain these settings.

You can also use this file as a base for further customization. For example, you can change the names of the modules to be specific to your environment. While we will explore the authentication variables in the next sections, you will need to provide the region, flavor_name, and other such variables yourself. You can get these variables from the OpenStack command line tools. For example:

• glance image-list for image_name

• keystone tenant-list for tenant_id and tenant_name

• nova flavor-list for control_flavor_name and worker_flavor_name

Or use the appropriate OpenStack commands such as openstack project list or the commands below.

• openstack image list for image_name

• openstack network list for net_id

• openstack flavor list for control_flavor_name / worker_flavor_name

Generate SSH keys If you do not have SSH keys already, generate a new pair for use with the project. You need to add the path to this key (public_key) to the openstack.tf file.

ssh-keygen -t rsa -f /path/to/project/sshkey -C "sshkey"

Getting OpenStack tenant settings auth_url, tenant_name, and tenant_id are unique for each OpenStack datacenter. You can get these from the OpenStack web console:

(a) Log Into the OpenStack web console and in the Manage Compute section, select “Access & Security”.

(b) Select the “API Access” tab.

(c) Click on the “Download the OpenStack RC File” button. We’ll use this file to set up authentication.

(d) Download the RC file for each Data Center you want to provision servers in. You may have to log into different OpenStack web consoles.


Open the file that you just downloaded. We are interested in three of the environment variables that are exported:

export OS_AUTH_URL=https://my.openstack.com:5000/v2.0
export OS_TENANT_ID=my-long-unique-id
export OS_TENANT_NAME="my-project"

Update your Terraform file with these values for the appropriate fields, and save the downloaded file for using the maintenance playbooks (you'll just need to source the environment variables into your shell).

OpenStack Security Group In order for terraform apply to complete correctly, you need to create a security group in OpenStack for Mantl.

You can either log in to the Web UI to perform this task or use the openstack command line interface as below.

openstack security group create <group_name>

Once your group is created, ensure you update the openstack.tf file accordingly.

OpenStack Username/Password The playbooks get username/password information via environment variables:

OS_USERNAME
    Your OpenStack username.

OS_PASSWORD
    Your OpenStack password.

Before running Terraform or any playbooks, run the following command to pull in your username and password for Ansible to use, changing the file name and location to the location of your OpenStack RC file:

source ~/Downloads/my-project.rc

Note: The default OpenStack RC file will prompt for your password in order to set OS_PASSWORD.

Once you're all set up there, run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.


Google Compute Engine
New in version 1.0: multi-zone support and terraform modularization

As of Mantl 0.3 you can bring up Google Compute Engine environments using Terraform. Mantl uses Terraform to provision hosts. You can download Terraform from terraform.io51.

Configuring Google Compute Engine for Terraform
Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/gce.sample.tf. The beginning of it looks like this:

variable "control_count" { default = 3 }variable "datacenter" {default = "gce"}variable "edge_count" { default = 3}variable "image" {default = "centos-7-v20150526"}variable "long_name" {default = "mantl"}variable "short_name" {default = "mi"}variable "ssh_key" {default = "~/.ssh/id_rsa.pub"}variable "ssh_user" {default = "centos"}variable "worker_count" {default = 1}variable "zones" {default = "us-central1-a,us-central1-b"

}

provider "google" {account_file = ""credentials = "${file("account.json")}"project = "mantl-0000"region = "us-central1"

}

Copy that file in its entirety to the root of the project as gce.tf to start customization. In the next sections, we'll explain how to obtain these settings.

Basic Settings
project, region and zones are unique values for each project in Google Compute Engine. project is available from the project overview page (use the Project ID, not the Project Name). You can select which region and zones you want to use from any of the GCE zones. If you're in the United States, (region) us-central1 and (zones) us-central1-a,us-central1-b,us-central1-c are good choices. If you're in Europe, europe-west1 and europe-west1-b,europe-west1-c might be your best bets. If you haven't previously activated Compute Engine for your project, this is a good time to do it.

51https://www.terraform.io/downloads.html


If you don’t want to commit these values in a file, you can source them from the environment instead:

GOOGLE_PROJECT
The ID of a project to apply resources to.

GOOGLE_REGION
The region to operate under.
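For example, using the sample values shown earlier (substitute your own project ID and region):

export GOOGLE_PROJECT=mantl-0000
export GOOGLE_REGION=us-central1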

image is the GCE image to use for your cluster instances. You can find image names under Images in the Compute Engine section of the GCP console.

ssh_user is the default user name for SSH access to your cluster hosts. This value will be dependent on the image that you use. Common values are centos or rhel.

datacenter is a name to identify your datacenter; this is important if you have more than one datacenter.

short_name is appended to the name tag and dns (if used) of each of the nodes to help better identify them.

control_count, edge_count and worker_count are the number of GCE instances that will get deployed for each node type.

control_type, edge_type and worker_type are used to specify the GCE machine type52.

52https://cloud.google.com/compute/docs/machine-types/


account.json
Terraform also needs a service account to be able to create and manage resources in your project. You can create one by going to the "Credentials" screen under "API Manager" in the GCP Products and Services menu. Service accounts are created under New credentials -> Service account key.

Note: You'll need to be an account owner to create this file - if you're not, ask your account owner to do this step for you.

You will either need to create a new service account or use an existing one. For this example we created one called terraform.


Once you've created your account, your browser will download a JSON file containing the credentials. Point credentials to the path where you decide to store that file. If you're running Terraform from a Google Compute instance with an associated service account, you may leave the credentials parameter blank.

Provisioning
Once you're all set up with the provider, customize your modules (for control_count, edge_count and worker_count). Make sure your local ssh-agent is running and your ssh key has been added; this is required by the Terraform provisioner. Run ssh-add ~/.ssh/id_rsa to add your ssh key. Run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster. Afterwards, you can use the instructions in getting started to install Mantl on your new cluster.
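A minimal sketch of that sequence, assuming the default key path from the sample template and an ssh-agent already running in the shell:

ssh-add ~/.ssh/id_rsa
terraform get
terraform plan
terraform apply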

Note: If you get the prompt below when running terraform plan or apply, you will need to add account_file = "" to the provider section of your gce.tf file:

provider.google.account_file
Enter a value:

This is a known bug in older versions of Terraform.

Terraform State
Terraform stores the state53 of your infrastructure in a file called "terraform.tfstate". This file can be stored locally or in a remote54 location such as consul55. If you use the gce.sample.tf that is provided, by default the state of all the modules is stored in a local terraform.tfstate file at the root of this project.

53https://www.terraform.io/docs/state/index.html
54https://www.terraform.io/docs/state/index.html
55https://github.com/hashicorp/terraform/blob/master/state/remote/remote.go#L38


Instead of storing the state for all the modules in one file, you might deploy the modules independently and have a different terraform.tfstate for each module (either locally or remotely). This can help with blue/green deployments, or with making sure you don't accidentally override more static parts of the infrastructure, such as the network.

In the gce.sample.tf we have included examples of how you would reference a remote state file for network variables.

To create terraform.tfstate locally for the network module, you would simply run terraform get, terraform plan and terraform apply in the terraform/gce/network/ directory. Then in your gce.tf file you would want to comment out:

#module "gce-network" {# source = "./terraform/gce/network"# network_ipv4 = "10.0.0.0/16"#}

and uncomment:

resource "terraform_remote_state" "gce-network" {backend = "_local"config {

path = "./terraform/gce/network/terraform.tfstate"}

}

and change all the network_name variables for the nodes to be:

network_name = "${terraform_remote_state.gce-network.output.network_name}"

Ideally you would store the state remotely, but configuring that is outside the scope of this document. The following blog explains how to configure and use remote state: Terraform remote state56.

Configuring DNS with Google Cloud DNS
You can set up your DNS records with Terraform:

DNS
New in version 0.3.

Terraform lets you configure DNS for your instances. The DNS provider is loosely coupled from the server provider, so you could for example use the dnsimple provider for either OpenStack or AWS hosts, or use the Google Cloud DNS provider for DigitalOcean hosts.

Providers
These are the supported DNS providers:

CloudFlare
New in version 0.5.

Terraform can use CloudFlare to provide DNS records for your cluster, independent of which provider you use to provision your servers.

CloudFlare Username/Token
The easiest way to configure credentials for CloudFlare is by setting environment variables:

CLOUDFLARE_EMAIL
Your e-mail address for the CloudFlare account

CLOUDFLARE_TOKEN
The CloudFlare token (found in the CloudFlare admin panel)
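For example (the values below are placeholders, not working credentials):

export CLOUDFLARE_EMAIL=you@example.com
export CLOUDFLARE_TOKEN=your-cloudflare-token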

56http://blog.mattiasgees.be/2015/07/29/terraform-remote-state/


Alternatively, you can set up the CloudFlare provider credentials in your .tf file:

provider "cloudflare" {email = "the e-mail address for your CloudFlare account"token = "your CloudFlare token"

}

DNSimple
New in version 0.3.

Terraform can use DNSimple to provide DNS records for your cluster, independent of which provider you use to provision your servers.

DNSimple Username/Token
The easiest way to configure credentials for DNSimple is by setting environment variables:

DNSIMPLE_EMAIL
Your e-mail address for the DNSimple account

DNSIMPLE_TOKEN
The DNSimple token (found in the DNSimple admin panel)

Alternatively, you can set up the DNSimple provider credentials in your .tf file:

provider "dnsimple" {token = "your dnsimple token"email = "your e-mail address for the dnsimple account"

}

GCP Cloud DNS
Terraform can use google_dns_record_set resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the managed_zone parameter. You can find your Managed Zone name in the GCP Networking Console.


If you haven't set up a managed zone for the domain you're using, you can do that with Terraform as well; just add this extra snippet in your .tf file:

resource "google_dns_managed_zone" "managed-zone" {name = "my-managed-zone"dns_name = "example.com."description "Managed zone for example.com."

}

In your gce.tf, you will want to enable the cloud-dns module:

module "cloud-dns" {source = "./terraform/gce/dns"control_count = "${var.control_count}"control_ips = "${module.control-nodes.control_ips}"domain = "mydomain.com"edge_count = "${var.edge_count}"edge_ips = "${module.edge-nodes.edge_ips}"lb_ip = "${module.network-lb.public_ip}"managed_zone = "my-cloud-dns-zone"short_name = "${var.short_name}"subdomain = "service"worker_count = "${var.worker_count}"worker_ips = "${module.worker-nodes.worker_ips}"

}


Route53
Terraform can use aws_route53_record resources to provide DNS records for your cluster.

In addition to the normal DNS variables, you will need to specify the hosted_zone_id parameter. You can find your own hosted zone ID in your AWS Route 53 console.

Route53 uses your normal Amazon Web Services provider credentials.

# Example setup for an AWS Route53
module "route53" {
  source = "./terraform/aws/route53/dns"
  control_count = "${var.control_count}"
  control_ips = "${module.control-nodes.ec2_ips}"
  domain = "my-domain.com"
  edge_count = "${var.edge_count}"
  edge_ips = "${module.edge-nodes.ec2_ips}"
  hosted_zone_id = "XXXXXXXXXXXX"
  short_name = "${var.short_name}"
  subdomain = ".dev"
  worker_count = "${var.worker_count}"
  worker_ips = "${module.worker-nodes.ec2_ips}"
  kubeworker_count = "${var.kubeworker_count}"
  kubeworker_ips = "${module.kubeworker-nodes.ec2_ips}"
}

DNS Records and Configuration
The providers create a uniform set of DNS A records:

• {short-name}-control-{nn}.node{subdomain}.{domain}

• {short-name}-edge-{nn}.node{subdomain}.{domain}

• {short-name}-worker-{nnn}.node{subdomain}.{domain}

• {control}{subdomain}.{domain}

• *.{subdomain}.{domain}

For example, with short-name=mantl, domain=example.com, a blank subdomain, 3 control nodes, 4 worker nodes, 2 Kubernetes worker nodes, and 2 edge nodes, that will give us these DNS records:

• mantl-control-01.node.example.com

• mantl-control-02.node.example.com

• mantl-control-03.node.example.com

• mantl-worker-001.node.example.com

• mantl-worker-002.node.example.com

• mantl-worker-003.node.example.com

• mantl-worker-004.node.example.com


• mantl-kubeworker-001.node.example.com

• mantl-kubeworker-002.node.example.com

• mantl-edge-01.node.example.com

• mantl-edge-02.node.example.com

• control.example.com (pointing to control 1)

• control.example.com (pointing to control 2)

• control.example.com (pointing to control 3)

• *.example.com (pointing to edge node load balancer)

If you don't want the DNS records hanging off the apex, you can specify the subdomain parameter to the DNS providers, which will be inserted in the records just before the apex. For example, if subdomain=.mantl in the previous config, the wildcard records would be *.mantl.example.com.

Warning: Due to a limitation in Terraform's string support, the subdomain must begin with a period (for example .mantl).

The node records are intended to be used to access each node individually for maintenance. You can access the frontend web components of the Mantl cluster through control.example.com, which will direct you to the rest of the stack.

You can use the wildcard records for load-balanced access to any app in Marathon. For example, if you have an app named test running in Marathon, you can access it at test.example.com. Please see the Traefik configuration for more details.
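To check a record from your workstation, a simple query against the hypothetical domain above would be:

dig +short test.example.com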

Configuration
A good way to configure DNS is to move the values common to your cloud config and DNS config into separate variables. You can do that like this:

variable control_count { default = 3 }
variable worker_count { default = 2 }
variable kubeworker_count { default = 2 }
variable edge_count { default = 2 }
variable short_name { default = "mantl" }

Then use those variables in the module like this:

module "dns" {source = "./terraform/cloudflare"

control_count = "${var.control_count}"control_ips = "${module.do-hosts.control_ips}"domain = "mantl.io"edge_count = "${var.edge_count}"edge_ips = "${module.do-hosts.edge_ips}"short_name = "${var.short_name}"subdomain = ".do.test"worker_count = "${var.worker_count}"worker_ips = "${module.do-hosts.worker_ips}"kubeworker_count = "${var.kubeworker_count}"kubeworker_ips = "${module.do-hosts.kubeworker_ips}"

}


Configuration Variables
Configuration is done with a set of consistent variables across the providers:

control_count, worker_count, kubeworker_count, and edge_count
The count of nodes for each role.

control_ips, worker_ips, kubeworker_ips, and edge_ips
A comma-separated list of IPs. The cloud provider modules all export this as control_ips, worker_ips, kubeworker_ips, and edge_ips as well, so you can plug it in like so:

control_ips = "${module.your-hosts.control_ips}"

domain
The top level domain to add the records to.

Example: mantl.io

short_name
The same short name passed into the cloud provider, used to generate consistent names.

subdomain
A path to put between the top-level domain and the generated records. Must begin with a period.

Example: .apps

control_subdomain
The name for the control group (to generate control.yourdomain.com.) By default, this is control, but you can change it to whatever you'd like.

Amazon Web Services
New in version 1.0: multi-az support and terraform modularization

As of Mantl 0.3 you can bring up Amazon Web Services environments using Terraform. You can download Terraform from terraform.io57.

Configuring Amazon Web Services for Terraform
Before we can build any servers using Terraform and Ansible, we need to configure authentication. We'll be filling in the authentication variables for the template located at terraform/aws.sample.tf. The beginning of it looks like this:

variable "amis" {default = {

us-east-1 = "ami-6d1c2007"us-west-2 = "ami-d2c924b2"us-west-1 = "ami-af4333cf"eu-central-1 = "ami-9bf712f4"eu-west-1 = "ami-7abd0209"ap-southeast-1 = "ami-f068a193"ap-southeast-2 = "ami-fedafc9d"ap-northeast-1 = "ami-eec1c380"sa-east-1 = "ami-26b93b4a"

}}variable "availability_zones" {default = "a,b,c"

}variable "control_count" { default = 3 }variable "datacenter" {default = "aws-us-west-2"}variable "edge_count" { default = 2 }variable "region" {default = "us-west-2"}variable "short_name" {default = "mantl"}

57https://www.terraform.io/downloads.html

2.3. Checking your deployment 67

Page 72: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

variable "long_name" {default = "mantl"}variable "ssh_username" {default = "centos"}variable "worker_count" { default = 4 }variable "kubeworker_count" { default = 2 }variable "dns_subdomain" { default = ".dev" }variable "dns_domain" { default = "my-domain.com" }variable "dns_zone_id" { default = "XXXXXXXXXXXX" }variable "control_type" { default = "m3.medium" }variable "edge_type" { default = "m3.medium" }variable "worker_type" { default = "m3.large" }variable "kubeworker_type" { default = "m3.large" }

provider "aws" {region = "${var.region}"

}

Copy that file in its entirety to the root of the project as aws.tf to start customization. In the next sections, we'll describe the settings that you need to configure.

Do not copy the text contents above into a file; if you do not have the terraform/aws.sample.tf file, you need to clone the mantl repository. Please note that newer versions of this file do not have "access_key" or "secret_key" lines; we automatically find your AWS credentials from Amazon's new "AWS Credentials file" standard.

Store your credentials like below in a file called ~/.aws/credentials on Linux/Mac, or %USERPROFILE%\.aws\credentials on Windows.

[default]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY

If you do not have an AWS access key ID and secret key, then follow the "Creating an IAM User" section below. If you already have working AWS credentials, you can skip this step.

Creating an IAM User
Before running Terraform, we need to supply it with valid AWS credentials. While you could use the credentials for your AWS root account, it is not recommended58. In this section, we'll cover creating an IAM User59 that has the necessary permissions to build your cluster with Terraform.

Note: You'll need to have an existing AWS account with sufficient IAM permissions in order to follow along. If not, ask your account owner to perform this step for you.

First, sign in to your AWS Console and navigate to the Identity & Access Management (IAM)60 service.

58http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
59http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html
60https://console.aws.amazon.com/iam/home


Next, navigate to the “Users” screen and click the “Create New Users” button.

You will be given the opportunity to create 5 different users on the next screen. For our purposes, we are just going to create one: "mantl". Make sure that you leave the "Generate an access key for each user" option checked and click the "Create" button.


On the next screen, you will be able to view and download your new Access Key ID and Secret Access Key. Make sure you capture these values in a safe and secure place as you will need them in the next section. You won't be able to retrieve your secret key later (although you can generate a new one, if needed).

The next step is to grant permissions to your new IAM user. Navigate back to the "Users" section and then click on the user name you just created. On the next screen, you will be able to manage the groups your user belongs to and to grant the permissions to view and modify AWS resources. For this example, we will not be using groups but that would be an option if you wanted to create multiple IAM users with the same permissions. We are going to keep it simple and use a managed policy to grant the necessary permissions to our IAM user.

Click the “Attach Policy” button.


On the "Attach Policy" screen you will see a long list of pre-built permissions policies. You can either scroll through the list or use the search filter to find the policy named "AmazonEC2FullAccess". Check the box next to that policy and click the "Attach Policy" button.
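If you prefer the AWS CLI to the web console, the same user setup can be sketched roughly as follows (assuming the CLI is already configured with credentials that are allowed to manage IAM):

aws iam create-user --user-name mantl
aws iam create-access-key --user-name mantl
aws iam attach-user-policy --user-name mantl --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess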


That’s it. At this point, your IAM user has sufficient privileges to provision your cluster with Terraform.

Note: Technically the "AmazonEC2FullAccess" managed policy grants more permissions than are actually needed. If you are interested in configuring your IAM user with the minimum set of permissions to provision a cluster, you can see the custom policy included at the bottom of this document.

Note: If you want to manage DNS with Route 53, you will need to attach a Route 53 policy as well.

Provider Settings
access_key and secret_key are the required credentials needed by Terraform to interact with resources in your AWS account. AWS credentials can be retrieved when creating a new account or IAM user. New keys can be generated and retrieved by managing Access Keys in the IAM Web Console. If you don't want to commit these values in the Terraform template, you can add them to your ~/.aws/credentials61 file or source them from the environment instead:

AWS_ACCESS_KEY_ID
The AWS Access Key for a valid AWS account or IAM user with the necessary permissions.

AWS_SECRET_ACCESS_KEY
The AWS secret key.

61https://blogs.aws.amazon.com/security/post/Tx3D6U6WSFGOK2H/A-New-and-Standardized-Way-to-Manage-Credentials-in-the-AWS-SDKs


Note: As a best practice62, it is preferred that you use credentials for an IAM user with appropriate permissions rather than using root account credentials.

region is the AWS region63 where your cluster will be provisioned. As an alternative to specifying region in the file, it can be read from the environment:

AWS_DEFAULT_REGION
The AWS region in which to provision cluster instances.
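Put together, a minimal environment setup before running Terraform might look like this (the keys below are the placeholder examples from the AWS documentation, not real credentials):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2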

Basic Settings
short_name is appended to the name tag and dns (if used) of each of the nodes to help better identify them. If you are planning to deploy multiple mantl clusters into the same AWS account, you'll need to change this (otherwise AWS items like ssh key names will conflict and the second terraform plan will fail).

The defaults for the settings below will work out of the box in Amazon's us-west-2 datacenter; change them if you don't want these defaults, or if you want larger VMs for each of the Mantl nodes.

region is the name of the region64 where your cluster resources will be provisioned. As long as your control, worker and edge count is greater than 1, your nodes will be spread across the availability zones in your region.

availability_zones are the availability zones in your region that you want to deploy your EC2 instances to.

source_ami is the EC2 AMI to use for your cluster instances. This must be an AMI id that is available in the region you specified.

ssh_username is the default user name for SSH access to your cluster hosts. This value will be dependent on the source_ami that you use. Common values are centos or ec2-user.

datacenter is a name to identify your datacenter; this is important if you have more than one datacenter.

control_count, edge_count and worker_count are the number of EC2 instances that will get deployed for each node type.

control_type, edge_type and worker_type are used to specify the EC2 instance type65 for your control nodes and worker nodes, and they must be compatible with the source_ami you have specified. The default EC2 instance type is an m3.medium.

Security Setup
Mantl doesn't ship with default passwords or certs. For security, we have provided a script to generate all the security configuration for your deployment.

Please run ./security_setup from the base of the mantl repository. This will generate certificates and other security tokens needed for the mantl deployment, as well as prompting you for a mantl admin password.

If you get an 'Import' error when running security setup, your local machine lacks certain python modules that the script needs. Please try pip install pyyaml and then re-run ./security_setup.

Provisioning
Once you're all set up with the provider, customize your modules (for control_count, worker_count, etc), run terraform get to prepare Terraform to provision your cluster, terraform plan to see what will be created, and terraform apply to provision the cluster.

After terraform apply has completed without errors, you're ready to continue. Next, follow the instructions at getting started to install Mantl on your new AWS VMs.

62http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
63http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
64http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
65https://aws.amazon.com/ec2/instance-types/


The sections below are for additional information and customization only; they are not required.

Terraform State
Terraform stores the state66 of your infrastructure in a file called "terraform.tfstate". This file can be stored locally or in a remote67 location such as S3. If you use the aws.sample.tf that is provided, by default the state of all the modules is stored in a local terraform.tfstate file at the root of this project.

Instead of storing the state for all the modules in one file, you might deploy the modules independently and have a different terraform.tfstate for each module (either locally or remotely). This can help with blue/green deployments, or with making sure you don't accidentally override more static parts of the infrastructure, such as a VPC.

In the aws.sample.tf we have included examples of how you would reference a remote state file for VPC variables.

To create terraform.tfstate locally for the VPC module, you would simply run terraform get, terraform plan and terraform apply in the terraform/aws/vpc/ directory. Then in your aws.tf file you would want to comment out:

module "vpc" {source ="./terraform/aws/vpc"availability_zones = "${availability_zones}"short_name = "${var.short_name}"region = "${var.region}"

}

And uncomment:

#resource "terraform_remote_state" "vpc" {# backend = "_local"# config {# path = "./vpc/terraform.tfstate"# }# }

#availability_zones = "${terraform_remote_state.vpc.output.availability_zones}"#default_security_group_id = "${terraform_remote_state.vpc.output.default_security_group}"#vpc_id = "${terraform_remote_state.vpc.output.vpc_id}"#vpc_subnet_ids = "${terraform_remote_state.vpc.output.subnet_ids}"

Ideally you would store the state remotely, but configuring that is outside the scope of this document. This68 is a good explanation of how to configure and use remote state.

Custom IAM Policy
At the time of this writing, the following IAM policy grants the minimal permissions needed to provision an AWS cluster with Terraform.

{"Version": "2012-10-17","Statement": [

{"Sid": "Stmt1433450536000","Effect": "Allow","Action": ["ec2:AssociateRouteTable","ec2:AttachInternetGateway","ec2:AttachVolume",

66https://www.terraform.io/docs/state/index.html67https://www.terraform.io/docs/state/index.html68http://blog.mattiasgees.be/2015/07/29/terraform-remote-state/

74 Chapter 2. General Information about Mantl with Ansible

Page 79: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

"ec2:AuthorizeSecurityGroupIngress","ec2:CreateInternetGateway","ec2:CreateRoute","ec2:CreateRouteTable","ec2:CreateSecurityGroup","ec2:CreateSubnet","ec2:CreateTags","ec2:CreateVolume","ec2:CreateVpc","ec2:DeleteInternetGateway","ec2:DeleteKeyPair","ec2:DeleteRouteTable","ec2:DeleteSecurityGroup","ec2:DeleteSubnet","ec2:DeleteVolume","ec2:DeleteVpc","ec2:DescribeImages","ec2:DescribeInstanceAttribute","ec2:DescribeInstances","ec2:DescribeInternetGateways","ec2:DescribeKeyPairs","ec2:DescribeNetworkAcls","ec2:DescribeRouteTables","ec2:DescribeSecurityGroups","ec2:DescribeSubnets","ec2:DescribeVolumes","ec2:DescribeVpcAttribute","ec2:DescribeVpcClassicLink","ec2:DescribeVpcs","ec2:DetachInternetGateway","ec2:DetachVolume","ec2:DisassociateRouteTable","ec2:ImportKeyPair","ec2:ModifyInstanceAttribute","ec2:ModifyVpcAttribute","ec2:ReplaceRouteTableAssociation","ec2:RevokeSecurityGroupEgress","ec2:RunInstances","ec2:TerminateInstances","elasticloadbalancing:*","iam:AddRoleToInstanceProfile","iam:CreateInstanceProfile","iam:CreateRole","iam:DeleteInstanceProfile","iam:DeleteRole","iam:DeleteRolePolicy","iam:DeleteServerCertificate","iam:GetInstanceProfile","iam:GetRole","iam:GetRolePolicy","iam:GetServerCertificate","iam:ListInstanceProfilesForRole","iam:PassRole","iam:PutRolePolicy","iam:RemoveRoleFromInstanceProfile","iam:UploadServerCertificate"

],"Resource": [

2.3. Checking your deployment 75

Page 80: Mantl DocumentationCHAPTER 1 Getting Started Note: This document assumes you have aworking Ansible installation1.If you don’t, install Ansible before continu-ing. This can be done

Mantl Documentation, Release 1.0.3

"*"]

}]

}

For managing DNS with Route 53, you can use a policy like the following:

{"Version": "2012-10-17","Statement": [

{"Effect": "Allow","Action": ["route53:ChangeResourceRecordSets","route53:GetHostedZone","route53:ListResourceRecordSets"

],"Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_HOSTED_ID"

},{

"Effect": "Allow","Action": ["route53:GetChange"

],"Resource": "arn:aws:route53:::change/*"

}]

}

You would replace YOUR_ZONE_HOSTED_ID with the hosted zone ID of your domain in Route 53.

3. Finally, you need to create a custom ansible playbook for your cluster. You can copy sample.yml to mantl.yml in your root directory to get started.

Building a Docker Image

Now you’ll be able to build a docker image from the Dockerfile:

docker build -t mi .

In this example, we are tagging the image with the name mi which we will be using later in this guide.

Running a Container

Now we can run a container from our image to provision a new cluster. Before we do that, there are a couple of things to understand.

By default, our Terraform templates are configured with the assumption that you have an SSH public key called id_rsa.pub in the .ssh folder of your home directory (along with a corresponding private key). Terraform uses this to authorize your key on the cluster nodes that it creates. This provides you with SSH access to the nodes, which is required for the subsequent Ansible provisioning. The simplest way to handle this when running from a Docker container is to mount your ~/.ssh folder in the container. You will see an example of this later in the document.

Another important thing to understand is how Terraform manages state69. Terraform uses a JSON formatted file to store the state of your managed infrastructure. This state file is important as it will allow you to use Terraform to plan, inspect, modify and destroy resources in your infrastructure.

69https://www.terraform.io/docs/state/index.html


By default, Terraform writes state to a file called terraform.tfstate in the same directory where you launched Terraform. Our Dockerfile is configured to store the state in a Docker volume called /state. This will allow you to mount that volume so that you can easily access the terraform.tfstate file to use for future Terraform runs.

Now we can use this information to run our container:

docker run -it -v ~/.ssh/:/ssh/ -v $PWD:/state mi

As discussed above, we are launching a container from the mi image we created earlier, while mounting our local ~/.ssh/ to /ssh in the container, and our current directory to the container's /state. Therefore, the terraform.tfstate files will be accessible from our local host directory after the run. Note that we are also allocating a TTY for the container process (using -it) so that we can enter our SSH key passphrase if necessary.

The container should launch and provision the cluster using the security.yml, Terraform template, and custom playbook that you configured in the Setup above.

Note: If you have customized your Terraform template to use a different SSH public key than the default ~/.ssh/id_rsa.pub, you can specify the corresponding private key as an environment variable (SSH_KEY) when running the container. For example:

docker run -it -e SSH_KEY=/root/.ssh/otherpvtkey -v ~/.ssh/:/ssh/ -v $PWD:/state mi

2.3.2 Restarting your deployment

To restart your deployment and make sure all components are restarted and working correctly, use the playbooks/reboot-hosts.yml playbook.

ansible-playbook playbooks/reboot-hosts.yml

2.3.3 Using a Docker Container to Provision your Cluster

You can also provision your cluster by running a docker container. See Using the Dockerfile for more information.


CHAPTER 3

Components

Mantl is made up of a number of components which can be customized, generally using Ansible variables.

3.1 Calico

New in version 0.4.

Calico1 is used in the project to add the IP-per-container functionality. Calico connects Docker containers through IP no matter which worker node they are on. Calico uses etcd to distribute information about workloads, endpoints, and policy to each worker node. Endpoints are network interfaces associated with workloads. Calico is deployed in a Docker container on each worker node and managed by systemd. Any workload managed by Calico is registered as a service in Consul.

Calico is not enabled by default. In order to run Calico, you should make a couple of changes to your mantl.yml. You will need to add the etcd role into the roles section for all hosts:

- hosts: all
  ...
  roles:
    - common
    ...
    - etcd

And you need to add the calico role to the role=worker hosts:

- hosts: role=worker
  roles:
    ...
    - calico

3.1.1 Modes

Calico can run in a public cloud environment that does not allow either L3 peering or L2 connectivity between Calico hosts. Calico will then route traffic between the Calico hosts using IP in IP mode. At this time, the full node-to-node BGP mesh is supported and configured in OpenStack only. Other cloud environments are set up with the IP in IP mode.

1https://www.projectcalico.org


3.1.2 Mesos

We pass the environment variable DOCKER_HOST to the executor using the flag --executor_environment_variables (added in Mesos v0.23.0), and thus to the subsequent tasks:

{"DOCKER_HOST": "localhost:2377"

}

This allows Calico to set up networking automatically by routing Docker API requests through the Powerstrip2 proxy that is running on port 2377 on each Mesos slave host.

3.1.3 Marathon

When you start containers on top of Marathon, you will need to add two environment variables to your JSON file: CALICO_IP and CALICO_PROFILE. You can assign an IP address to CALICO_IP explicitly, or set it to auto and one will be allocated automatically. If the profile set with CALICO_PROFILE doesn't exist, it will be created automatically. If you don't provide the two variables, the Docker default network settings will be applied. The variable SERVICE_PORT is optional; it registers a service port in Consul for your application. You can make an SRV query to return this port.

Example:

{"container": {"type": "DOCKER","docker": {

"image": "busybox"}

},"id": "testapp","instances": 1,"env": {"CALICO_IP": "auto","CALICO_PROFILE": "dev","SERVICE_PORT": "3000"

},"cpus": 0.1,"mem": 32,"uris": [],"cmd": "while sleep 10; do date -u +%T; done"

}

3.1.4 Consul

When you start a workload on Marathon with the proper environment variables such as CALICO_IP and CALICO_PROFILE, the workload is registered in Consul as a service. The Powerstrip logic was extended for this case. The registered name is constructed in this way: MARATHON_APP_ID plus the -direct suffix. For example, if you create a workload with the name of testapp, then the testapp-direct service will be registered in Consul.

Thus, you have the option to query Consul in two ways:

1. In order to obtain Docker host IP addresses where your workload is running:

2https://github.com/clusterhq/powerstrip


dig @localhost -p 8600 testapp.service.consul

2. To resolve IP addresses from the Calico network:

dig @localhost -p 8600 testapp-direct.service.consul

In the above examples, adjust the .consul domain as needed if you customized it when building your cluster.
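If you set SERVICE_PORT for the workload, the registered port can be retrieved with an SRV query, for example:

dig @localhost -p 8600 -t SRV testapp.service.consul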

3.1.5 calicoctl

You can use the calicoctl command line tool to manually configure and start the Calico services, interact with the etcd datastore, define and apply network and security policies, and more.

Examples:

calicoctl help
calicoctl status
calicoctl profile show --detailed
calicoctl endpoint show --detailed
calicoctl pool show

3.1.6 Logging

All components log to directories under /var/log/calico inside the calico-docker container. By default this is mapped to the /var/log/calico directory on the host. Files are automatically rotated, and by default 10 files of 1MB each are kept.

Variables

You can use these variables to customize your Calico installation. For more information, refer to the etcd configuration.

etcd_client_port
Port for etcd client communication

Default: 2379

calico_network
Containers are assigned IPs from this network range

Default: 192.168.0.0/16

calico_profile
Endpoints are added to this profile for interconnectivity

Default: dev

3.2 Chronos

New in version 0.1.

Chronos3 is a distributed and fault-tolerant scheduler that runs on top of Apache Mesos and can be used for job orchestration. You can think of it as a distributed cron service. It supports custom Mesos executors as well as the default command executor. By default, Chronos executes sh (on most systems bash) scripts.

3http://mesos.github.io/chronos/


3.2.1 Installation

As of 1.1, Chronos is distributed as an addon for Mantl. After a successful initial build (from your customized sample.yml), you can install it by running:

ansible-playbook addons/chronos.yml

It can take a few minutes before Chronos becomes available and healthy.

3.2.2 Accessing the Chronos User Interface

After Chronos has been successfully installed and initialized, it should be possible to access the user interface directly from Mantl UI.

3.2.3 Default Configuration

The default configuration of the Chronos addon will require at least 1 worker node with at least 1 CPU and 1 GB of memory available.

3.2.4 Customizing your Installation

There are a number of configuration options available for Chronos (each documented in the Variables section below).

As an example, let's say you wanted to run 3 Chronos instances for high-availability purposes and you wanted each to have more CPU and memory allocated. To do this, create a new yaml file (chronos.yml, for example) that looks something like this:

---
chronos_instances: 3
chronos_cpus: 2.0
chronos_mem: 2048.0

When you install the Chronos addon, you can tell ansible to use this yaml file to configure your installation:

ansible-playbook -e @chronos.yml addons/chronos.yml

3.2.5 Uninstalling the Chronos addon

Uninstalling the Chronos addon can be done with a single API call. For example:

export creds='admin:password'
export url=https://mantl-control-01

# uninstall chronos framework
curl -sku $creds -XDELETE -d "{\"name\": \"chronos\"}" $url/api/1/install

You will need to adjust the creds and url variables with values that are applicable to your cluster.

3.2.6 Upgrading from 1.0

If you are upgrading from a Mantl 1.0 cluster that is already running Chronos, there is actually little reason to switch over to the addon version that runs in Marathon. Feel free to continue using your existing Chronos installation.


However, if for some reason you want to switch, you can run the following steps to disable the existing Chronos install.

Warning: Please note that you will need to recreate any tasks you already have scheduled in Chronos. They will not be automatically migrated.

ansible 'role=control' -s -m shell -a 'consul-cli service-deregister chronos'
ansible 'role=control' -s -m shell -a 'rm /etc/consul/chronos.json'
ansible 'role=control' -s -m service -a 'name=chronos enabled=no state=stopped'

The new method of installing Chronos requires a version of mantl-api later than 0.1.7. You can upgrade mantl-api manually, or run a sample playbook from a more recent version of Mantl (after 1.0.4) to get it. After upgrading mantl-api, you can install the addon in the usual way:

ansible-playbook addons/chronos.yml

3.2.7 Variables

chronos_cassandra_port
Port for Cassandra.

default: 9042

chronos_cassandra_ttl
TTL for records written to Cassandra.

default: 31536000

chronos_cpus
CPU shares to allocate to each Chronos instance.

default: 1.0

chronos_instances
Number of Chronos instances to run.

default: 1

chronos_decline_offer_duration
The duration (milliseconds) for which to decline offers by default.

default: 5000

chronos_disable_after_failures
Disables a job after this many failures have occurred.

default: 0

chronos_failover_timeout
The failover timeout in seconds for Mesos.

default: 604800

chronos_failure_retry
Number of ms between retries.

default: 60000

chronos_framework_name
The framework name.

default: "chronos"


chronos_graphite_reporting_interval
Graphite reporting interval (seconds).

default: 60

chronos_hostname
The advertised hostname stored in ZooKeeper so another standby host can redirect to this elected leader.

default: "$HOST"

chronos_id
Unique identifier for the app consisting of a series of names separated by slashes.

default: "/chronos"

chronos_mem
Memory (MB) to allocate to each Chronos instance.

default: 1024.0

chronos_mesos_task_cpu
Number of CPUs to request from Mesos for each task.

default: 0.1

chronos_mesos_task_disk
Amount of disk capacity to request from Mesos for each task (MB).

default: 256.0

chronos_mesos_task_mem
Amount of memory to request from Mesos for each task (MB).

default: 128.0

chronos_min_revive_offers_interval
Do not ask for all offers (also already seen ones) more often than this interval (ms).

default: 5000

chronos_reconciliation_interval
Reconciliation interval in seconds.

default: 600

chronos_revive_offers_for_new_jobs
Whether to call reviveOffers for new or changed jobs.

default: false

chronos_schedule_horizon
The look-ahead time for scheduling tasks in seconds.

default: 60

chronos_task_epsilon
The default epsilon value for tasks, in seconds.

default: 60

chronos_zk_timeout
The timeout for ZooKeeper in milliseconds.

default: 10000


3.3 Collectd

Collectd role for deploying Collectd.

3.3.1 Installation

As of 1.1.0, Collectd is distributed as an addon for Mantl. After a successful initial run from your customized sample.yml, install it with ansible-playbook -e @security.yml addons/collectd.yml.

3.3.2 Variables

This role has the following global settings:

Hostname
Hostname to append to metrics

Default: {{ inventory_hostname }}

Interval
Global interval for sampling and sending metrics

Default: 10 seconds

This role enables the following Collectd plugins and settings:

cpu
Type: read
Description: amount of time spent by the CPU in various states

disk
Type: read
Description: performance statistics for block devices and partitions

df
Type: read
Description: file system usage information
Default: exclude all system and obscure file system types

interface
Type: read
Description: network interface throughput, packets/s, errors

load
Type: read
Description: system load

memory
Type: read
Description: physical memory utilization

processes
Type: read
Description: number of processes grouped by state

swap
Type: read
Description: amount of memory currently written to swap disk

uptime
Type: read
Description: system uptime

users
Type: read
Description: counts the number of users currently logged into the system

network
Type: write
Description: send metrics to a collectd compatible receiver
Default: Server "localhost" "25826"

syslog
Type: write
Description: write collectd logs to syslog
Default: LogLevel "err"

3.4 Consul

New in version 0.1.

Consul4 is used in the project to coordinate service discovery, specifically using the inbuilt DNS server.

3.4.1 Upgrading

New in version 1.0.

Mantl 1.0 includes Consul v0.6.3. If you are running Mantl 0.5.1, you'll need to run the playbooks/upgrade-consul.yml playbook before reprovisioning your cluster to 1.0 in order to ensure a smooth upgrade.

Upgrades from releases prior to Mantl 0.5.1 have not been tested.

3.4.2 Variables

You can use these variables to customize your Consul installation. You'll typically want to set at least consul_dc, consul_servers_group, and consul_gossip_key. These variables are roughly sorted from most commonly used to least.

consul_dc
If set, consul will advertise this datacenter (default dc1)

consul_dc_group
The group to look in for the local datacenter. Using the Terraform plugins, this should be dc=dcname, and it will default to that with the current datacenter name.

consul_servers_group
Group to configure join IPs from. For example, if this value is consul_servers, IPs will be calculated from the hosts in that group and added to the list of servers to join. Defaults to role=control.

consul_log_level
The level of logging for the Consul agent. The available log levels are "trace", "debug", "info", "warn", and "err".

Default: warn

consul_gossip_key
If set, this is used to encrypt gossip communication between nodes. This is unset by default, but you really should set one up. You can get a suitable key (16 bytes of random data encoded in base64) by running openssl rand 16 | base64.

consul_advertise
IP address Consul will advertise as available for other nodes to connect to. Defaults to the value of private_ipv4 (from terraform inventory).

consul_is_server
Whether this node should be a server (true) or an agent (false). (default true)

4https://www.consul.io/


consul_bootstrap_expect
The number of servers to expect to join the cluster before bootstrapping. This is used in place of a two-phase bootstrap (where one node bootstraps and then restarts as a regular server.) This is set by default to be the number of servers in consul_servers_group, but can be changed where the situation warrants (for example if you have many servers, you may want to set this to be a low number like 3.)

retry_join
Automatically generated by the calculation described in consul_servers_group, but you can override it for custom behavior.

consul_enable_tls
If (true) use TLS to verify the authenticity of servers and clients. (default false)

consul_ca_file
File name of a PEM-encoded certificate authority. Only used when consul_enable_tls is true.

consul_cert_file
File name of a PEM-encoded certificate. Only used when consul_enable_tls is true.

consul_key_file
File name of a PEM-encoded private key. Only used when consul_enable_tls is true.

3.5 Distributive

New in version 1.1.

Distributive5 is used in Mantl to run detailed, granular health checks for various services.

This role is run several times as a dependency for other roles.

3.5.1 Variables

You can use these variables to customize your Distributive installation.

distributive_interval
The interval between running distributive checks. Default is "1m".

distributive_timeout
The timeout for running distributive checks. Default is "30s".

checklist_versions
The version of the checklist package to install for the specified role. Defaults are different for each role, but are of the form e.g. consul: 0.2.4-1.

3.6 dnsmasq

The project uses dnsmasq6 to configure each host to use Consul for DNS.

This role also adds (as of Mantl 1.1) search paths for .consul and .node.consul. This means that you can address your nodes by their consul names directly: if you have a node named x, you can address it as x or as x.node.consul. Addressing services works similarly, e.g. kubernetes.service.consul.

5https://www.consul.io/
6http://www.thekelleys.org.uk/dnsmasq/doc.html


3.6.1 Changes

Starting with version 1.0.4, dnsmasq no longer uses Google's DNS (8.8.8.8 and 8.8.4.4), preferring the cloud provider's DNS. If you want to use the old behavior, add your preferred nameservers to /etc/resolv.conf.masq, where DNSMasq will look to load resolvers after a name is not found in Consul.
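For example, to restore the old behavior you could list Google's resolvers there (a sketch; adjust to your preferred nameservers):

echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf.masq
echo "nameserver 8.8.4.4" | sudo tee -a /etc/resolv.conf.masq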

3.6.2 Variables

The dnsmasq role uses consul_servers_group and consul_dc_group defined in Consul.

mantl_dns_version
The version of mantl-dns to install.

Default: 1.1.0

3.7 Docker

New in version 0.1.

Docker7 is a container manager and scheduler. In their words, it "allows you to package an application with all of its dependencies into a standardized unit for software development." Their site has more on what Docker is8. We use it in Mantl to ship units of work around the cluster, combined with Marathon's scheduling.

3.7.1 Variables

docker_source
Specify origin of docker packages.

Possible values: docker, redhat.

Default: docker (using packages from Docker, Inc.)

docker_tcp
Open the Docker socket to TCP control?

Note that if TCP/TLS are enabled after your cluster is already built, you'll run into https://github.com/docker/docker/issues/17902 and have to restart all your containers manually:

$ ansible --become all -a 'systemctl start nginx-consul'
$ ansible --become role=control -a 'systemctl start nginx-mantlui'
$ ansible --become role=control -a 'systemctl start kubelet'
$ ansible --become role=kubeworker -a 'systemctl start kubelet'

Default: false

docker_tcp_tls
Protect the Docker socket with TLS authentication. TLS should always be used in conjunction with docker_tcp.

Defaults to the value of docker_tcp.

7https://www.docker.com/
8https://www.docker.com/what-docker


3.7.2 Using a private Docker registry

In addition to the open, official Docker registry at https://hub.docker.com, one can use a private in-house docker registry for storing and pulling images. Docker Hub also supports storing private images.

One must configure credentials in order to use private repositories. This is done with the security-setup script; run it with the flag --use-private-docker-registry=true. It will then ask you for the username, password and e-mail address for the registry user. You can also specify a custom URL for an in-house Docker registry, or omit it, in which case it will default to the official registry, https://index.docker.io/v1/.
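A sketch of the invocation, using only the flag named above (run it from the same location as the security setup step described earlier):

./security-setup --use-private-docker-registry=true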

In addition to configuring access to the specified private docker registries, this will also create an archive that can be used to supply credentials to Marathon tasks that require access to those registries. You can specify the path to the generated credentials archive in the uris key of your Marathon app definition:

...
"uris": [
  "file:///etc/docker.tar.gz"
],
...

See the Marathon documentation9 for more information.

3.8 ELK

New in version 1.0.

The ELK role is a meta role that combines Elasticsearch, Logstash, and Kibana to provide automatic log shipping from all Mantl nodes to an Elasticsearch cluster. Kibana is available to visualize and analyze this data.

This role runs an Elasticsearch cluster via the Elasticsearch Mesos Framework [10]. It also configures Logstash on all nodes to forward logs to that cluster. Finally, Kibana is run via the Kibana Mesos Framework [11]. It is configured to talk to an Elasticsearch client node (which acts as a smart load balancer for the Elasticsearch cluster) and includes a default sample dashboard.

3.8.1 Installation

As of 1.0, the ELK stack is distributed as an addon for Mantl. After a successful initial run (from your customized sample.yml), install it with ansible-playbook -e @security.yml addons/elk.yml. It can take several minutes for all components to deploy and become healthy.

3.8.2 Accessing User Interfaces

After the Elasticsearch framework and the Kibana application have been successfully installed and initialized, it should be possible to access their corresponding user interfaces directly from Mantl UI.

3.8.3 Default Configuration

The default configuration of the ELK stack will require at least 4 worker nodes, each having at least 1 full CPU and 1 GB of memory available to Mesos. In addition, each worker node will need to have at least 5 GB of free disk space.

9. https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
10. https://github.com/mesos/elasticsearch
11. https://github.com/mesos/kibana


While a cluster of this size will be sufficient to evaluate and test the ELK stack on Mantl, we encourage you to review the configuration variables below to size the cluster as appropriate for your environment.

3.8.4 Customizing your Installation

The size of your Elasticsearch cluster is controlled by the variables documented below. As an example, let's say that you just wanted to stand up an ELK stack on a small cluster for evaluation purposes. You only want to run a single node since you are not worried about high availability or data safety in this scenario. To do this, create a new YAML file (elasticsearch.yml, for example) that looks something like this:

    ---
    elasticsearch_ram: 512
    elasticsearch_executor_ram: 512
    elasticsearch_cpu: 0.5
    elasticsearch_executor_cpu: 0.5
    elasticsearch_nodes: 1

In this example, we are configuring both the Elasticsearch framework scheduler and the Elasticsearch nodes (executors) to each use 512 MB of memory and half a CPU. We are also indicating that we only want a single Elasticsearch node launched in the cluster.

When you install the ELK addon, you can tell ansible to use this yaml file to configure your installation:

ansible-playbook -e @security.yml -e @elasticsearch.yml addons/elk.yml

With this configuration, Kibana and the Elasticsearch client node will still be deployed with their default configurations. Of course, you can customize further as needed.

3.8.5 Kibana deployment

By default, Kibana will be run via the Kibana Mesos framework. It is also possible to run Kibana on Marathon. You can control this by setting the kibana_package variable. Set it to kibana to run Kibana via Marathon and kibana-mesos (the default) to run it via the Mesos framework.
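For example, to run Kibana on Marathon instead of the Kibana Mesos framework, you could put the override in a small YAML file and pass it with an extra -e, mirroring the elasticsearch.yml example above (the file name kibana.yml is only an illustration):

    # kibana.yml
    kibana_package: kibana

    ansible-playbook -e @security.yml -e @kibana.yml addons/elk.yml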

3.8.6 Uninstalling the ELK Addon

You can uninstall the ELK stack with the following command:

ansible-playbook -e @security.yml -e 'elk_uninstall=true' addons/elk.yml

This will remove the Elasticsearch framework, the Elasticsearch client node, and Kibana from your cluster. By default, the Elasticsearch data directories will not be removed. If you do not need to preserve your Elasticsearch data, you can set the elasticsearch_remove_data variable to true when you run the uninstall:

ansible-playbook -e @security.yml -e 'elk_uninstall=true elasticsearch_remove_data=true' addons/elk.yml

3.8.7 Upgrading

You do not need to re-install the addon on an existing pre-1.1 Mantl cluster that is already running the ELK addon. The existing addon should continue running fine on 1.1. If you do wish to switch to the updated addon, you should uninstall the Elasticsearch framework and disable Kibana on your control nodes (see the 1.0.3 uninstall instructions below) prior to re-installing the addon. It will be up to you to back up and migrate your Elasticsearch data in this scenario.


3.8.8 Uninstalling the Elasticsearch Framework (1.0.3)

Uninstalling the Elasticsearch framework involves several steps. Below are examples of the commands that you can run to completely remove the framework from your cluster. You will need to adjust the creds, url, and control_node variables to values that are applicable to your cluster. You will also need to have the jq [12] utility installed to follow this example.

export creds='admin:password'
export url=https://mantl-control-01
export control_node=mantl-control-01

# remove scheduler from marathon
curl -sku $creds -XDELETE $url/marathon/v2/apps/elasticsearch

# find the mesos framework id
frameworkId=$(curl -sku $creds $url/api/1/frameworks | jq -r '.[] | select(.name == "elasticsearch") | .id')

# remove the mesos framework
curl -sku $creds -XDELETE $url/api/1/frameworks/$frameworkId

# clean up mesos framework state from zookeeper
ansible $control_node -s -m shell -a 'zookeepercli -servers zookeeper.service.consul -force -c rmr /elasticsearch'

# delete all elasticsearch data (optional)
ansible 'role=worker' -s -m shell -a 'rm -rf /data'

3.8.9 Uninstalling Kibana (1.0.3)

On Mantl 1.0.3, we do not have an uninstall process for Kibana. However, it is easy to disable it on your cluster. The following commands can be run to disable Kibana:

ansible 'role=control' -s -m shell -a 'consul-cli service-deregister kibana'
ansible 'role=control' -s -m shell -a 'rm /etc/consul/kibana.json'
ansible 'role=control' -s -m service -a 'name=kibana enabled=no state=stopped'

3.8.10 Variables

elasticsearch_ram
    The amount of memory to allocate to the Elasticsearch scheduler instance (MB).

default: 1024

elasticsearch_java_opts
    The JAVA_OPTS value that should be set in the environment.

default: -Xms1g -Xmx1g

elasticsearch_executor_ram
    The amount of memory to allocate to each Elasticsearch executor instance (MB).

default: 1024

elasticsearch_disk
    The amount of disk resource to allocate to each Elasticsearch executor instance (MB).

default: 5120

12. https://stedolan.github.io/jq/


elasticsearch_cpu
    The amount of CPU resources to allocate to the Elasticsearch scheduler.

default: 1.0

elasticsearch_executor_cpu
    The amount of CPU resources to allocate to each Elasticsearch executor instance.

default: 1.0

elasticsearch_nodes
    Number of Elasticsearch executor instances.

default: 3

elasticsearch_cluster_name
    The name of the Elasticsearch cluster.

default: “mantl”

elasticsearch_service
    The name of the service that is registered in Consul when the framework is deployed. This needs to match what would be derived via mesos-consul. For example, when elasticsearch_framework_name is set to mantl/elasticsearch, the service name should be elasticsearch-mantl.

default: “elasticsearch-mantl”

elasticsearch_executor_name
    The name of the executor tasks in Mesos.

default: “elasticsearch-executor-mantl”

elasticsearch_framework_version
    The version of the Elasticsearch Mesos framework.

default: “1.0.1”

elasticsearch_framework_name
    The name of the Elasticsearch Mesos framework.

default: “mantl/elasticsearch”

elasticsearch_framework_ui_port
    The port that the Elasticsearch framework user interface listens on.

default: 31100

elasticsearch_client_id
    The id of the elasticsearch-client application in Marathon.

default: “mantl/elasticsearch-client”

elasticsearch_client_service
    The name of the service that is registered in Consul when the Elasticsearch client node is deployed. This needs to match what would be derived via mesos-consul. For example, when elasticsearch_client_id is set to mantl/elasticsearch-client, the service name should be elasticsearch-client-mantl.

default: “elasticsearch-client-mantl”

elasticsearch_client_elasticsearch_service
    The name of the service registered in Consul for the Elasticsearch client node to connect to.

default: “transport_port.{{ elasticsearch_executor_name }}”


elasticsearch_client_client_port
    The HTTP port for the Elasticsearch client node to listen on.

default: 9200

elasticsearch_client_transport_port
    The transport port for the Elasticsearch client node to listen on.

default: 9300

elasticsearch_client_cpu
    The amount of CPU resources to allocate to the Elasticsearch client node.

default: 0.5

elasticsearch_client_ram
    The amount of memory to allocate to the Elasticsearch client node (MB).

default: 512

elasticsearch_client_java_opts
    The JAVA_OPTS value that should be set in the environment.

default: -Xms1g -Xmx1g

kibana_package
    The name of the package to use for the Kibana deployment. When set to kibana-mesos, the Kibana Mesos framework will be used. When set to kibana, Kibana will be deployed in a Docker container running in Marathon.

default: kibana-mesos

kibana_id
    The id of the Kibana application in Marathon (Kibana on Marathon).

default: mantl/kibana

kibana_service
    The name of the service that is registered in Consul when Kibana is deployed. This needs to match what would be derived via mesos-consul. For example, when kibana_id is set to mantl/kibana, the service name should be kibana-mantl (Kibana on Marathon).

default: kibana-mantl

kibana_image
    The Docker image to use for Kibana (Kibana on Marathon).

default: ciscocloud/mantl-kibana:4.3.2

kibana_elasticsearch_service
    The name of the Elasticsearch service registered in Consul for the Kibana instance to connect to (Kibana on Marathon).

default: “{{ elasticsearch_client_service }}”

kibana_cpu
    The amount of CPU resources to allocate to each Kibana instance (Kibana on Marathon).

default: 0.5

kibana_ram
    The amount of memory to allocate to each instance of Kibana (MB) (Kibana on Marathon).

default: 512


kibana_instances
    The number of Kibana instances to run (Kibana on Marathon).

default: 1

kibana_mesos_id
    The id of the Kibana framework application in Marathon (Kibana Mesos framework).

default: mantl/kibana

kibana_mesos_framework_name
    The name of the Kibana Mesos framework (Kibana Mesos framework).

default: kibana-mantl

kibana_mesos_service
    The name of the service that is registered in Consul when the Kibana framework is deployed. This needs to match what would be derived via mesos-consul. For example, when kibana_mesos_id is set to mantl/kibana, the service name should be kibana-mantl (Kibana Mesos framework).

default: kibana-mantl

kibana_mesos_image
    The Docker image to use for Kibana (Kibana Mesos framework).

default: ciscocloud/mantl-kibana:4.3.2

kibana_mesos_elasticsearch_service
    The name of the Elasticsearch service registered in Consul for the Kibana instance to connect to (Kibana Mesos framework).

default: “{{ elasticsearch_client_service }}”

kibana_mesos_kibana_service
    The name of the Kibana service registered in Consul (Kibana Mesos framework).

default: “{{ kibana_mesos_framework_name }}-task”

kibana_mesos_scheduler_cpu
    The amount of CPU resources to allocate to the Kibana framework scheduler (Kibana Mesos framework).

default: 0.2

kibana_mesos_scheduler_ram
    The amount of memory to allocate to the Kibana framework scheduler (MB) (Kibana Mesos framework).

default: 256

kibana_mesos_executor_cpu
    The amount of CPU resources to allocate to each Kibana executor instance (Kibana Mesos framework).

default: 0.5

kibana_mesos_executor_ram
    The amount of memory to allocate to each Kibana executor instance (MB) (Kibana Mesos framework).

default: 512

kibana_mesos_instances
    The number of Kibana executors to launch (Kibana Mesos framework).

default: 1


3.9 etcd

New in version 0.4.

etcd [13] is used in the project by Calico to distribute information about workloads, endpoints, and policy to each Docker host. It's run in a Docker container on each host and managed by systemd.

3.9.1 Variables

You can use these variables to customize your etcd installation. Beware, you will need to update ETCD_AUTHORITY in the Calico role as well.

etcd_client_port
    Port for etcd client communication

Default: 2379

etcd_peer_port
    Port for etcd server-to-server communication

Default: 2380

3.10 GlusterFS

New in version 0.4.

Gluster [14] implements a distributed filesystem. It is used for container volume management and syncing around the cluster.

Current Version: 3.7.6

3.10.1 Installation

As of 0.5.1, GlusterFS is distributed as an addon for Mantl. After a successful initial run (from your customized sample.yml), install it with ansible-playbook -e @security.yml addons/glusterfs.yml.

3.10.2 Restarting

There is a bug with the current implementation where the glusterd servers will not come up after a restart, but they'll start fine once the restart is complete. To start them after a restart, run:

ansible -m command -a 'sudo systemctl start glusterd' role=control

You will also need to mount the disks after this operation:

ansible -m command -a 'sudo mount -a' role=control

13. https://github.com/coreos/etcd
14. http://www.gluster.org/


3.10.3 Use with Docker

Any Docker volume should be able to access data inside the /mnt/container-volumes partition. Because of SELinux, the volume label needs to be updated within the container. You can use the z flag to do this, as in this example, which will open up a prompt in a container where the volume is mounted properly at /data:

docker run --rm -it -v /mnt/container-volumes/test:/data:z gliderlabs/alpine /bin/sh

3.10.4 Cloud Configuration

On Google Compute Engine, Amazon Web Services, and OpenStack the Mantl Terraform modules will create an external volume. By default, this volume will be 100 GB, but you can change this with the Terraform glusterfs_volume_size variable. The attached disk will be formatted as an XFS volume and mounted on the control nodes.
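For instance, a Terraform configuration might size the volume like the hedged sketch below (the module name and source path are placeholders that depend on how your own Terraform files are organized):

    module "control-nodes" {
      source                = "./terraform/openstack"   # placeholder path
      glusterfs_volume_size = 250                       # GB; the Mantl default is 100
    }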

3.10.5 Variables

glusterfs_version
    The version of GlusterFS to download

default: 3.7.6

glusterfs_mode
    The mode that GlusterFS will be configured in. Valid values: "server" and "client".

default: client

glusterfs_replication
    The amount of replication to use for new volumes. Should be a factor of the number of nodes in the server group.

default: the number of control nodes present in the server group

glusterfs_server_group
    A selector for a group to use as Gluster servers.

default: role=control

glusterfs_brick_mount
    Where the Gluster external disk will be mounted on supported cloud providers.

default: /mnt/glusterfs

glusterfs_brick_device
    Automatically calculated depending on which cloud provider you are using. This should only be changed if you're adding support for a new cloud provider or know very well where your volume is going to be located.

default: automatically generated

glusterfs_volume_force
    Whether the GlusterFS volume should be force-created (that is, created with storage on the root partition). This is true when not using a cloud provider that supports external block storage.

default: automatically generated “yes” or “no”

glusterfs_brick_location
    The area in the filesystem to store bricks. It defaults to the value of glusterfs_brick_mount if an external disk is mounted, and /etc/glusterfs/data otherwise.

default: automatically generated


glusterfs_volumes
    A list of names and mounts for volumes. The default looks like this:

    glusterfs_volumes:
      - name: container-volumes
        mount: /mnt/container-volumes

If you need to add any more volumes, be sure to include the container-volumes mount in the list, or that volume will not work on new servers.
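For instance, to add a second volume alongside the default one, the list might look like the following sketch (the extra volume name and mount point are placeholders):

    glusterfs_volumes:
      - name: container-volumes
        mount: /mnt/container-volumes
      - name: shared-data
        mount: /mnt/shared-data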

3.11 Kafka

New in version 1.1.

This role installs the Kafka Mesos Framework [15] and starts Kafka brokers.

3.11.1 Installation

After a successful initial run (from your customized sample.yml), you can install Kafka with ansible-playbook -e @security.yml addons/kafka.yml. It can take several minutes for all components to deploy and become healthy.

3.11.2 Accessing the Kafka Mesos REST API

After the Kafka framework and the Kafka brokers have been successfully started and initialized, it should be possible to access the Kafka Mesos REST API at /kafka on control nodes.

3.11.3 Default Configuration

The default configuration of the Kafka brokers will require at least 3 worker nodes that each have at least 4 CPUs and 4 GB of memory available to Mesos.

Depending on your planned environment, you may wish to customize the sizing of your Kafka cluster using the variables documented below.

3.11.4 Installing Kafka Manager

Optionally, you can choose to install the Kafka Manager [16] tool to help you manage your Kafka deployment. To do so, you can install the addon with the kafka_manager_install variable set to yes. For example:

ansible-playbook -e @security.yml -e 'kafka_manager_install=yes' addons/kafka.yml

3.11.5 Customizing your Installation

The size and configuration of your Kafka cluster are controlled by the variables documented below; the sketch that follows shows one way to shrink the defaults for a small evaluation cluster.
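This is only an illustration (the file name kafka.yml and the exact sizes are placeholders); it reduces the broker count and per-broker resources, then passes the overrides with an extra -e:

    # kafka.yml
    kafka_broker_count: 1
    kafka_broker_cpu: 1
    kafka_broker_mem: 1024
    kafka_broker_heap: 768

    ansible-playbook -e @security.yml -e @kafka.yml addons/kafka.yml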

15. https://github.com/mesos/kafka
16. https://github.com/yahoo/kafka-manager


3.11.6 Variables

kafka_scheduler_name
    The application ID of the Kafka scheduler in Marathon.

default: “mantl/kafka”

kafka_service_name
    The name of the service that is registered in Consul when the framework is deployed. This needs to match what would be derived via mesos-consul. For example, when kafka_scheduler_name is set to mantl/kafka, the service name should be kafka-mantl.

default: “kafka-mantl”

kafka_scheduler_cpu
    The amount of CPU to allocate to the Kafka scheduler instance.

default: 0.2

kafka_scheduler_mem
    The amount of memory to allocate to the Kafka scheduler instance (MB).

default: 512

kafka_broker_count
    The number of Kafka brokers to start.

default: 3

kafka_broker_cpu
    The amount of CPU to allocate to each Kafka broker.

default: 4

kafka_broker_mem
    The amount of memory to allocate to each Kafka broker (MB).

default: 4096

kafka_broker_heap
    The amount of heap to allocate to each Kafka broker (MB).

default: 4096

kafka_broker_port
    The port to bind to for the Kafka brokers.

default: 9092

kafka_broker_options
    The Kafka options to pass to the brokers.

default:

•log.flush.interval.ms=10000

•log.flush.interval=1000

•num.recovery.threads.per.data.dir=1

•delete.topic.enable=true

•log.index.size.max.bytes=10485760

•num.partitions=8

•num.network.threads=3


•socket.request.max.bytes=104857600

•log.segment.bytes=536870912

•log.cleaner.enable=true

•zookeeper.connection.timeout.ms=1000000

•log.flush.scheduler.interval.ms=2000

•log.retention.hours=72

•log.flush.interval.messages=20000

•log.dirs=/mantl/a/dfs-data/kafka-logs\,/mantl/b/dfs-data/kafka-logs\,/mantl/c/dfs-data/kafka-logs\,/mantl/d/dfs-data/kafka-logs\,/mantl/e/dfs-data/kafka-logs\,/mantl/f/dfs-data/kafka-logs

•log.index.interval.bytes=4096

•socket.receive.buffer.bytes=10485

•min.insync.replicas=2

•replica.lag.max.messages=10000000

•replica.lag.time.max.ms=1000000

•log.retention.check.interval.ms=3600000

•message.max.bytes=20480

•default.replication.factor=2

•zookeeper.session.timeout.ms=500000

•num.io.threads=8

•auto.create.topics.enable=false

•socket.send.buffer.bytes=1048576

•topic.flush.intervals.ms=5000

kafka_broker_jvm_options
    The Kafka JVM options to pass to the brokers.

default:

•“-Dcom.sun.management.jmxremote”

•“-Dcom.sun.management.jmxremote.port=9010”

•“-Dcom.sun.management.jmxremote.local.only=false”

•“-Dcom.sun.management.jmxremote.authenticate=false”

•“-Dcom.sun.management.jmxremote.ssl=false”

kafka_manager_install
    Indicates whether or not to install the Kafka Manager tool.

default: no

kafka_manager_id
    The id of the Kafka Manager application in Marathon.

default: mantl/kafka-manager


kafka_manager_service_name
    The name of the service that is registered in Consul when Kafka Manager is deployed. This needs to match what would be derived via mesos-consul. For example, when kafka_manager_id is set to mantl/kafka-manager, the service name should be kafka-manager-mantl.

default: kafka-manager-mantl

kafka_manager_instances
    Number of Kafka Manager instances.

default: 1

kafka_manager_cpu
    The amount of CPU resources to allocate to each Kafka Manager instance.

default: 0.5

kafka_manager_mem
    The amount of memory to allocate to each Kafka Manager instance.

default: 1024

kafka_manager_load_balancer
    Indicates whether or not to expose Kafka Manager on an edge node. Set to external if you wish to expose Kafka Manager via Traefik. Be aware that this will mean the application is available externally without authentication.

default: off

3.12 Kubernetes

New in version 1.1.

From Kubernetes.io [17]:

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerizedapplications.

Since version 1.1, Mantl ships Kubernetes by default. All you need to do is set the kubeworker_count and kubeworker_type variables in your Terraform configuration (see the example Terraform configurations for where these variables integrate into the workflow).

kubectl is installed and configured for the default SSH user on the control nodes. Please refer to the Kubernetes getting started documentation [18] for how to use Kubernetes.

To talk to the services launched inside Kubernetes, launch them with the NodePort service type (more on what service types are available [19]), then connect on the assigned port on any Kubernetes worker node. Consul service integration will happen in a future release.
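For example, a Service manifest exposing a hypothetical web deployment via NodePort could look like the sketch below (the name, label selector, and ports are placeholders); Kubernetes then assigns a port in its default NodePort range (30000-32767), which is reachable on any Kubernetes worker node:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080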

3.12.1 Running kubectl Remotely

If you have a local installation of kubectl, you can run kubectl config to configure access to your Mantl cluster. Here is an example:

17. http://kubernetes.io
18. http://kubernetes.io/docs/hellonode/
19. https://aster.is/blog/2016/03/11/the-hamburger-of-kubernetes-service-types/


kubectl config set-cluster mantl --server=https://control-node/kubeapi --insecure-skip-tls-verify=true
kubectl config set-credentials mantl-basic-auth --username=admin --password=password
kubectl config set-context mantl --cluster=mantl --user=mantl-basic-auth
kubectl config use-context mantl

You can set the cluster and context names (mantl in the above example) as desired. In addition, make sure you replace the values of control-node and password with values that are applicable to your cluster.

3.12.2 Cloud Provider Integration

Cloud provider integration is enabled by default for AWS and GCE clusters starting in Mantl 1.3. This means that Kubernetes can manage cloud-specific resources such as disk volumes and load balancers. If you wish to disable cloud provider integration, set the variable enable_cloud_provider to false when building your cluster.

Note: If you are planning on destroying your cluster with terraform, you should first use kubectl or the Kubernetes API to delete your Kubernetes-managed resources. Otherwise, it is possible that they will interfere with your ability to successfully terraform destroy your cluster.

3.12.3 DNS Outline

Every node in the cluster hosts etcd and skydns instances. All DNS queries for the .local zone are resolved locally. If a container asks for a name in the .local domain, the request is routed through dnsmasq to skydns, which accesses data stored in etcd. Updates for container DNS names are managed by kube2sky, which acts upon kubeapi events.

3.13 Logstash

Logging role for deploying and managing Logstash [20] with Docker and systemd as a part of the ELK stack.

3.13.1 Variables

You can use these variables to customize your Logstash installations:

logstash_output_stdout
    A simple output which prints to STDOUT.

Default: false

logstash_output_elasticsearch
    Configures an Elasticsearch output for the local Logstash agent on all nodes. You can set this variable to a dictionary that maps to the Logstash Elasticsearch output [21] settings.

This is not needed if you are planning to use the Mantl ELK addon. Use this if you want to send Logstash data to an Elasticsearch cluster that is not managed by Mantl.

Default: n/a

Example:

20. http://logstash.net
21. https://www.elastic.co/guide/en/logstash/1.5/plugins-outputs-elasticsearch.html


    logstash_output_elasticsearch:
      host: "elasticsearch.example.com"
      protocol: "http"

logstash_output_kafka
    Configures a Kafka output for the local Logstash agent. You can set this variable to a dictionary that maps to the Logstash Kafka output [22] settings.

Default: n/a

Example:

    logstash_output_kafka:
      broker_list: "broker1:port,broker2:port,..."
      topic_id: "logstash"

logstash_input_log4j
    Read events over a TCP socket from a Log4j SocketAppender.

Default: false

logstsh_log4j_port
    TCP port

Default: 4560

3.14 Marathon

New in version 0.1.

Marathon [23] is a scheduler for Mesos: it takes specifications for apps to run and lets you scale them up and down, and deploy new versions or roll back. Like Mesos' leader mode, Marathon can run on as many servers as you like and will elect a leader among nodes using ZooKeeper.

Keep Marathon servers close to Mesos leaders for best performance; they talk back and forth quite a lot to keep the services in the cluster in a good state. Placing them on the same machines would work.

Marathon listens on port 8080. To connect to Marathon securely, set marathon_keystore_path and marathon_keystore_password, then connect via HTTPS on port 8443.

The Marathon role also sets up mesos-consul [24] and marathon-consul [25] for service discovery.

3.14.1 Variables

marathon_http_credentials
    HTTP Basic authentication credentials, in the form "user:password".

marathon_keystore_path
    Path on the local machine that contains a Java keystore. Marathon has docs on generating this file [26]. Please note that if this option is set, marathon_keystore_password is required.

marathon_keystore_password
    Password for the keystore specified in marathon_keystore_path.

22. https://www.elastic.co/guide/en/logstash/1.5/plugins-outputs-kafka.html
23. http://mesosphere.github.io/marathon/
24. https://github.com/CiscoCloud/mesos-consul
25. https://github.com/CiscoCloud/marathon-consul
26. https://mesosphere.github.io/marathon/docs/ssl-basic-access-authentication.html


marathon_principal
    Principal to use for Mesos framework authentication.

Note: If you plan to use framework authentication, be sure to add the principal and secret to mesos_credentials and set mesos_authenticate_frameworks to yes.

default: marathon

marathon_secret
    Secret to use for Mesos framework authentication. Authentication will only be enabled if this value is set to a non-blank value. See also the note in marathon_principal.

default: ""

mesos_consul_image
    Image for the mesos-consul [27] bridge.

Default: drifting/mesos-consul

mesos_consul_image_tag
    Tag for the mesos-consul [28] bridge.

Default: latest

marathon_consul_image
    Image for the marathon-consul [29] bridge.

Default: brianhicks/marathon-consul

marathon_consul_image_tag
    Tag for the marathon-consul [30] bridge.

Default: latest

marathon_logging_level
    Log level for Marathon.

Default: warn

mantl_api_image
    The mantl-api Docker image.

Default: ciscocloud/mantl-api

mantl_api_image_tag
    The tag for the mantl-api Docker image.

Default: 0.2.2

mantl_api_config_url
    The URL for a custom mantl-api configuration file. This URL must be accessible from your cluster nodes. The file will be downloaded into the Mesos sandbox for the mantl-api task.

Default: “”

mantl_api_config_file
    The path to the config file for mantl-api to read. This should be based on the file name of the mantl_api_config_url variable above. For example, if you set mantl_api_config_url to http://somebucket.s3.amazonaws.com/mantl-api/config.toml, you would want to set mantl_api_config_file to $MESOS_SANDBOX/config.toml.

    Default: ""

27. https://github.com/CiscoCloud/mesos-consul
28. https://github.com/CiscoCloud/mesos-consul
29. https://github.com/CiscoCloud/marathon-consul
30. https://github.com/CiscoCloud/marathon-consul



3.15 Mesos

New in version 0.1.

Mesos [31] is the distributed system kernel that manages resources across multiple nodes. When combined with Marathon, you can basically think of it as a distributed init system.

3.15.1 Modes

Mesos can be run in one of two “modes”:

• A server mode (called “master” or “leader”)

• A client mode (called “follower” or “agent”. The term “slave” is used but deprecated.)

This project prefers the "leader/follower" nomenclature. In addition to the "official" modes described below, mesos_mode supports running both modes on a single machine for testing or development scenarios.

Leader

Leaders will communicate with each other via ZooKeeper to coordinate which leader controls the cluster. Because of this, you can run as many leader nodes as you like, but you should consider keeping an odd number in the cluster to make attaining a quorum easier. A single leader node will also work fine, but will not be highly available.

Follower

Follower nodes need to know where the leaders are, and there can be any number of them. You should keep the follower machines free of "heavier" services running outside Mesos, as this will cause inaccurate resource availability counts in the cluster.

3.15.2 Upgrading

New in version 1.0.

If you are running Mantl 0.5.1, you'll need to run the playbooks/upgrade-mesos-marathon.yml playbook before reprovisioning your cluster to 1.0. The packaging format changed in the 1.0 release; this playbook ensures a smooth upgrade.

Upgrades from releases prior to Mantl 0.5.1 have not been tested.

3.15.3 Variables

You can use these variables to customize your Mesos installation.

31https://mesos.apache.org/


mesos_mode
    Set to leader for leader mode, and follower for follower mode. Set to mixed to run both leader and follower on the same machine (useful for development or testing).

default: follower

mesos_log_dir
    default: /var/log/mesos

mesos_work_dir
    default: /var/run/mesos

mesos_leader_port
    default: 5050

mesos_follower_port
    default: 5051

mesos_leader_cmd
    default: mesos-master

mesos_follower_cmd
    default: mesos-slave

mesos_isolation
    The isolation level for tasks using the Mesos containerizer [32]. See the Mesos Configuration documentation [33] for more information. If you wish to disable enforcement of cpu and memory resource limits for tasks, set this to posix/cpu,posix/mem.

default: cgroups/cpu,cgroups/mem

mesos_attributes
    Set attributes for Mesos agents. Provide these as a list to set multiple attributes. Format:

        - "key:value"
        - "key:value"

default: node_id:{{ inventory_hostname }}

mesos_resources
    Set resources for Mesos agents (useful for setting available ports that applications can be bound to). Provide these as a list to set multiple resources. Format:

        - name(role):value
        - name(role):value...

default: ports(*):[4000-5000, 7000-8000, 9000-10000, 25000-26000, 31000-32000]
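Concretely, a host or group variables file could set these roughly as follows (the attribute values and port ranges are only illustrative):

    mesos_attributes:
      - "rack:a1"
      - "datacenter:dc1"
    mesos_resources:
      - "ports(*):[7000-9000, 31000-32000]"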

mesos_cluster
    default: mantl

mesos_zk_hosts
    A ZooKeeper connection string in the host:mesos_zk_port format, generated from the hosts in zookeeper_server_group.

mesos_zk_dns
    Consul DNS entries for ZooKeeper hosts.

default: zookeeper.service.consul

mesos_zk_port
    default: 2181

32. http://mesos.apache.org/documentation/latest/mesos-containerizer/
33. http://mesos.apache.org/documentation/latest/configuration/


mesos_zk_chroot
    ZooKeeper znode to use as a base for Mesos data.

default: mesos

mesos_credentials
    A list of credentials to add for authentication. These should be in the form { principal: "...", secret: "..." }.

default: []

mesos_authenticate_frameworks
    Enable Mesos authentication for frameworks. You should set mesos_credentials for credentials if this is set.

default: set automatically if framework credentials are present

mesos_authenticate_followers
    Enable Mesos authentication from followers. If set, each follower will need mesos_follower_secret set in their host variables.

default: set automatically if follower credentials are present

mesos_follower_principal
    The principal to use for follower authentication

default: follower

mesos_follower_secret
    The secret to use for follower authentication

default: not set. Set this to enable follower authentication.

mesos_logging_level
    The log level for Mesos. This is set for all components.

Default: WARNING

3.16 Traefik

New in version 0.5.

Traefik [34] is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Mesos/Marathon, Consul, Etcd, Zookeeper, BoltDB, Rest API, file...) to manage its configuration automatically and dynamically.

Traefik is used as the only work role on the edge nodes. You should customize traefik_marathon_domain to set a domain (for example apps.yourdomain.com) and then set a wildcard A record, *.apps.yourdomain.com, pointing at each of the edge servers.
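A minimal sketch, assuming the apps.yourdomain.com example: set the variable in your playbook (or an -e override file) and create the wildcard DNS record for the edge nodes.

    # playbook or override file
    traefik_marathon_domain: apps.yourdomain.com

    # DNS zone fragment (illustrative; substitute your edge node addresses)
    *.apps.yourdomain.com.  IN  A  203.0.113.10
    *.apps.yourdomain.com.  IN  A  203.0.113.11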

3.16.1 Variables

You can use these variables to customize your Traefik installation.

traefik_marathon_endpoint
    The Marathon endpoint that Traefik talks to. Do not change this unless you are using non-default security settings (namely, if you have iptables disabled, this could also be set to http://marathon.service.consul:8080).

34. https://traefik.io/


default: http://marathon.service.consul:18080

traefik_marathon_domain
    The domain that Traefik will match hosts on by default (you can change this on a per-app basis [35]).

default: marathon.localhost

traefik_marathon_expose_by_default
    Automatically expose Marathon applications in Traefik.

The Traefik default is false, that is, traffic is not forwarded automatically.

The mantl default is set to true.

3.17 ZooKeeper

New in version 0.1.

ZooKeeper [36] is used for coordination among Mesos and Marathon nodes. Rather than storing things in this service yourself, you should prefer Consul.

3.17.1 Variables

You can use these variables to customize your ZooKeeper installation.

zk_id
    The value of zk_id in the ZooKeeper configuration file. If not provided it will be set by the playbook.

zookeeper_service
    default: zookeeper

zookeeper_env
    default: dev

zookeeper_ensemble
    default: mantl

zookeeper_container_name
    The name that will be used for the container at runtime. Generated automatically from zookeeper_service, zookeeper_env, zookeeper_ensemble, and zk_id if not set.

zookeeper_data_volume
    The name of the data volume to store state in. Generated automatically from zookeeper_env, zookeeper_ensemble, and zk_id if not set.

zookeeper_docker_image
    default: asteris/zookeeper

zookeeper_docker_tag
    default: latest

zookeeper_docker_ports
    Port arguments, which will be passed directly to Docker. Opens TCP 2181, 2888, and 3888 by default.

default: "-p 2181:2181 -p 2888:2888 -p 3888:3888"

zookeeper_docker_env
    default: "/etc/default/{{ zookeeper_service }}"

35. http://traefik.readthedocs.org/en/latest/backends/#marathon-backend
36. https://zookeeper.apache.org/


zookeeper_log_threshold
    Log level for ZooKeeper

default: WARN

zookeeper_log_retain_count
    Number of ZooKeeper transaction logs and snapshots to keep.

default: 3

zookeeper_log_purge_interval
    Interval in hours that ZooKeeper waits to purge transaction logs and snapshots.

default: 12

The project also includes a number of Ansible roles that multiple components can use:

3.18 Common

New in version 0.1.

The common role prepares a system with functionality needed by multiple roles. Specifically, it:

• sets timezone to UTC

• configures hosts for simple name resolution (before Consul DNS is set up)

• installs common software like pip

• adds users.

• adds SSL certificates created by security-setup to the root CA store

• does various workarounds for cloud providers

3.18.1 Variables

use_host_domain
    Add a domain component to hosts in /etc/hosts

default: false

host_domain
    The domain component to add to hosts in /etc/hosts

default: novalocal

3.19 consul-template

The consul-template role makes sure a good version of consul-template [37] is present on the system for templating tasks.

37. https://github.com/hashicorp/consul-template


3.20 logrotate

Logrotate is used to rotate logs. Currently, the rotation interval is set to rotate daily for 7 days. The logs of the following components are rotated:

• Docker

• Mesos

• ZooKeeper

3.21 Nginx

Nginx [38] is a web and proxy server. Mantl uses it in front of the Mesos, Marathon, and Consul web UIs to provide basic authentication and SSL. Those proxies are set up in the individual roles linked above, and the base nginx role is just used to move the relevant certificates into place.

The following technology previews are also included. These may be used more fully in the future, but now just exist for preview purposes to gather feedback and build initial implementations against:

3.22 Vault

New in version 0.3.0.

Vault [39] "secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing." It is currently included as a technology demo in Mantl.

3.22.1 Variables

vault_default_port
    Port for Vault to listen on.

default: 8200

vault_command_options
    Extra options to pass to Vault at startup. The defaults allow both the client and server to authenticate one another with their TLS certs.

default: --ca-cert=/etc/pki/CA/ca.cert --client-cert={{ host_cert }} --client-key={{ host_key }}

vault_init_json
    Initial JSON configuration for Vault.

default: {"secret_shares": 4, "secret_threshold": 3}

Mantl includes some logic that is provided via our own packaging system, and so is not visible in the Ansible roles. Here are the links to our package sources:

• mantl-packaging [40]

• mesos-packaging [41]

38. http://nginx.org/
39. https://www.vaultproject.io/
40. https://github.com/asteris-llc/mantl-packaging
41. https://github.com/asteris-llc/mesos-packaging


CHAPTER 4

Addons

Addons are Ansible roles or other software configurations that have been known to work well with Mantl. These are not as tested and maintained as the core components (those listed in the sample playbook), so please allow for some manual configuration.

All addon playbooks should be run after the initial installation of Mantl via the sample playbook. Documentation on their installation and customization is provided in their component READMEs.


CHAPTER 5

Security

5.1 Security

5.1.1 Overview

Our goal is to make it easy to:

• Encrypt restricted and confidential information in transit

• Control access to resources

• Collect and monitor security-related events

• Manage security policies, credentials and certificates

Because we are working with different projects, the controls implemented in each component vary greatly.

Note: Mantl is not currently suitable for running untrusted code or multi-tenant workloads.

5.1.2 Focus Areas

The security model focuses on three areas:

• Provisioning (Ansible, Terraform)

• Securing system level components (Consul, Mesos, Marathon, etc.)

• Securing Applications that run on the platform

The next sections will deal with each area.

Provisioning

Note: SSH strict host key checking is turned off for now to make development against the project easier. When you set up a production instance of the project, you should change host_key_checking and record_host_keys in ansible.cfg.

Provisioning security involves setting up a secure cloud environment, basic server security, and securing secrets on the provisioning systems. Ansible [1] and Terraform [2] are our primary provisioning tools.

1. https://www.ansible.com
2. https://www.terraform.io


The following security features are implemented or on the roadmap:

• Automate provisioning of SSH key into the cloud host (version 0.1)

• Automate the creation of cloud networks (VPC) (version 0.3)

• Automate creation of network security groups (version 0.3)

• Create sudo administrative users and provide ssh access (version 0.1)

• Update Linux kernel & all packages on the node (version 0.1)

• Automate creation of passwords/SSL certificates #65 [3] (version 0.2)

• Restrict memory usage of system Docker containers (version 1.2)

• Create unified TLS certificates for each node (version 1.3)

• Always verify every component’s API with its TLS certificate (ongoing)

• Auto-rotate TLS certificates (future)

• Store secrets in Vault (future)

• Provide scheduler integration with Vault (future)

The following items are currently not on the roadmap:

• Setting up LDAP servers

• Setting up a Kerberos environment

• Encrypting server disks for data at rest.

Credential Generation

The security-setup script has been created to automate the creation of credentials. Please refer to the security-setup script documentation.

Component Security

This area deals with securing communication and access for the underlying components like Consul and Mesos.

HTTP authentication, and SSL/TLS

HTTP traffic to the management systems is managed via an Nginx proxy that provides basic authentication and SSL termination. For example, Consul binds to localhost:8500, and Nginx will bind to port 8500 on the network interface and forward traffic to localhost.

The web credentials are stored in the Consul K/V, and Consul-template is used to modify the Nginx authentication file. The long-term roadmap of the project is to move more configuration into Consul and Vault, and out of our provisioning systems.

Consul

Consul endpoints are secured with TLS certificates, and all incoming and outgoing connections are verified with TLS. Consul exec is disabled for security reasons. The default ACL policy is deny.

3. https://github.com/CiscoCloud/mantl/issues/65


Consul Template

Consul Template is used across the environment to dynamically configure components based on Consul K/V pairs. It is configured with TLS and verifies all connections.

Docker

The daemon is not exposed to TCP by default, but can be configured to do so, while verifying incoming requests with TLS.

Warning: Never expose the Docker daemon to network traffic without securing it with TLS.

Marathon

Marathon supports both basic HTTP authentication and TLS. We place an authenticating proxy in front of the instance, using the same credentials as for the Mesos and Consul administrative accounts.

Marathon does not support Zookeeper authentication, so the Zookeeper znode must have world access. We expect this will change soon.

References:

• SSL and Basic Access Authentication: https://github.com/Mesosphere/marathon/blob/master/docs/docs/ssl-basic-access-authentication.md

• Support Zookeeper Authentication [4]

Mesos

We currently support Mesos framework authorization, and will support SSL in the future (issue #1109).

Mesos Authorization [5] allows control of the following actions: register_frameworks, shutdown_frameworks, run_tasks. Support for Mesos authorization is still being reviewed.

The following steps are taken to secure Mesos if security is enabled:

• On the leader nodes, the --authenticate flag is set

• On the leader nodes, the --authenticate_slaves flag is set

• A credential file is created and the --credential=/path is set on leaders and followers (version 0.2)

• Mesos nodes connect to zookeeper with a username:password (version 0.2)

• Zookeeper ACL created on the /Mesos znode: world read, Mesos full access (version 0.2)

References:

• Framework Authentication in Apache Mesos 0.15.0 [6]

4. https://github.com/Mesosphere/marathon/issues/1336
5. http://mesos.apache.org/documentation/latest/authorization/
6. http://mesos.apache.org/blog/framework-authentication-in-apache-mesos-0-15-0/


Zookeeper

The main recommendation for securing Zookeeper is to use Kerberos, which is currently out of scope for the project.

Zookeeper supports ACLs [7] on Znodes, but ACLs are not recursive.

SSL endpoints are supported via Netty, but the C client does not yet have SSL support (ZOOKEEPER-2125 [8], ZOOKEEPER-2122 [9]).

Compensating controls:

• We don’t store any restricted data within Zookeeper

• Implement ACLs and Authentication on the /Mesos znode using user digest. (version 0.2)

• Implement ACLs and Authentication on the /marathon znode using user digest. (version 0.3+, pending support for Marathon zk authentication)

• Provide Stunnel encryption for Zookeeper Peer-to-Peer communication (version 0.3+)

• Develop dynamic firewall using Consul Template on Zookeeper ports (version 0.3)

• Update Marathon configuration to use zk user:password (future version)

• Update Mesos configuration to use zk user:password (version 0.2)

References:

• Setting ACLs & Auth in zookeeper [10]

5.1.3 Longer-term goals

Application SSL support

Enable developers to secure their applications with SSL.

Phase I: SSL support for wildcard DNS domains.

Phase II: SSL support for custom DNS domains

5.2 the security-setup script

The security-setup script is located in the root of the project. It will set up authentication and authorization for you, as described in the component documentation. When components are updated, you can run it again, as many times as you want. It will only set the variables it needs to.

After you've set up security with the script, you can include it in your playbook runs by specifying the -e or --extra-vars option, like so:

ansible-playbook -e @security.yml your_playbook.yml

7. http://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_ZooKeeperAccessControl
8. https://issues.apache.org/jira/browse/ZOOKEEPER-2125
9. https://issues.apache.org/jira/browse/ZOOKEEPER-2122

10. https://ihong5.wordpress.com/2014/07/24/apache-zookeeper-setting-acl-in-zookeeper-client/


5.2.1 Certificates

If not present, security-setup will create a root CA to generate certificates from. If you want to use your own CA, add the key in ssl/private/cakey.pem and the cert in ssl/cacert.pem.

If you have your own (self-)signed certificates, you can put them in ssl/private/your.key.pem and ssl/certs/your.cert.pem. Just override the locations the script generates (for example, the Consul key and cert would be ssl/private/consul.key.pem and ssl/certs/consul.cert.pem, respectively) and they'll be used instead of the generated files, and not overridden.

In the event that you need to regenerate a certificate, rename or delete the appropriate CSR and certificate from the certs folder and the private component in private, then re-run security-setup.

5.2.2 Options

Run security-setup --help to see a list of options with their default values. Options like --mesos take a boolean argument. You can use the following values in these options:

Value    Interpreted as
t        True
T        True
1        True
True     True
true     True
f        False
F        False
0        False
False    False
false    False


CHAPTER 6

Upgrading

6.1 Overview

Beginning with Mantl 1.0, our goal is to support a straightforward upgrade path from a cluster running a previous release.

However, upgrade support should be considered alpha at this time; it has not been extensively tested on production clusters. Please use with caution and report any issues you have with the process.

6.2 Upgrading OS packages

We provide two playbooks for upgrading OS-level system packages on a cluster: playbooks/upgrade-packages.yml and playbooks/rolling-upgrade-packages.yml. The first playbook upgrades all nodes on your cluster in parallel, and the second upgrades each node serially. You want to use the rolling upgrade on a cluster that is already running Consul; otherwise, you will likely lose quorum and destabilize your cluster.

6.3 Upgrading from 1.1 to 1.2

If you have a running 1.1 cluster, you need to perform the following steps:

6.3.1 Update security.yml

Mantl 1.2 requires an additional setting in the security.yml file that you generated when you built your cluster. To auto-generate the necessary settings, you simply need to re-run security-setup:

./security-setup

Of course, if you customized your security settings (manually or using the CLI arguments), you should be careful to re-run security-setup the same way.

6.3.2 Core Component Rolling Upgrade

ansible-playbook -e @security.yml playbooks/upgrade-1.1.yml

This playbook performs a rolling update of consul which is required to support new features in Mantl 1.2.


6.3.3 Upgrade to Mantl 1.2

At this point, you can now upgrade the rest of the components to 1.2 with the standard provisioning playbook:

ansible-playbook -e @security.yml mantl.yml

If you customized variables with -e when building your 1.1 cluster, you will likely want to include the same variables when running the 1.2 version of the playbook. For example:

ansible-playbook -e @security.yml -e consul_dc=mydc mantl.yml

6.4 Upgrading from 1.0.3 to 1.1

If you have a running 1.0.3 cluster, you need to perform the following steps:

6.4.1 Update security.yml

Mantl 1.1 requires some additional settings in the security.yml file that you generated when you built your cluster. To auto-generate the necessary settings, you simply need to re-run security-setup:

./security-setup

Of course, if you customized your security settings (manually or using the CLI arguments), you should be careful to re-run security-setup the same way.

The main change was a switch to using a single certificate for internal nginx proxies.

6.4.2 Core Component Rolling Upgrade

ansible-playbook -e @security.yml playbooks/upgrade-1.0.3.yml

This playbook performs a rolling update of several core components including Consul, nginx-consul based services, and mantl-dns. Due to compatibility issues, we also disable the collectd Docker plugin.

6.4.3 Upgrade to Mantl 1.1

At this point, you can now upgrade the rest of the components to 1.1 with the standard provisioning playbook:

ansible-playbook -e @security.yml mantl.yml

If you already have a pre-1.1 mantl.yml, you will want to incorporate the 1.1 changes (see sample.yml). Also, if you customized variables with -e when building your 1.0.3 cluster, you will likely want to include the same variables when running the 1.1 version of the playbook. For example:

ansible-playbook -e @security.yml -e consul_dc=mydc mantl.yml

6.5 Upgrading from 0.5.1 to 1.0

If you have a running 0.5.1 cluster, you need to perform the following steps:


6.5.1 Update security.yml

Mantl 1.0 requires some additional settings in the security.yml file that you generated when you built your cluster. To auto-generate the necessary settings, you simply need to re-run security-setup:

./security-setup

Of course, if you customized your security settings (manually or using the CLI arguments), you should be careful to re-run security-setup the same way.

For your reference, the following settings have been added:

• consul_acl_marathon_token

• consul_acl_secure_token

• consul_dns_domain

6.5.2 A note on consul_dns_domain

Prior to 1.0, the ansible consul_dns_domain variable was defined in a number of different playbooks. It is now included in security.yml and can be customized from a single location. This simplifies the configuration and reduces the likelihood of mistakes. If you are working with a customized mantl.yml file, you should remove all consul_dns_domain definitions from it and ensure consul_dns_domain is set as desired in your security.yml.
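One quick, optional way to confirm the cleanup is to check that consul_dns_domain no longer appears in your customized mantl.yml and is defined in security.yml:

grep -n consul_dns_domain mantl.yml security.yml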

6.5.3 Upgrade Distributive, Consul, Mesos, and Marathon

ansible-playbook -e @security.yml playbooks/upgrade-0.5.1.yml

This playbook performs a Distributive upgrade and includes a couple of other playbooks that perform a rolling upgrade of Consul, Mesos, and Marathon.

6.5.4 Upgrade to Mantl 1.0

At this point, you can now upgrade the rest of the components to 1.0 with the standard provisioning playbook:

ansible-playbook -e @security.yml mantl.yml

6.6 Upgrading from 1.1 to 1.2

Mantl 1.2 removed the consul_dns_domain variable. Services are reachable via <service-name>.service.consul and nodes via <hostname>.node.consul, instead of <service-name>.service.<consul-dns-domain> and <hostname>.node.<consul-dns-domain>, respectively.
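For example, for a hypothetical service named web and a node named control-01, lookups run from a cluster node (where dnsmasq forwards the Consul domain) now look like the following; on 1.1 the same lookups used your configured consul-dns-domain in place of consul:

dig web.service.consul
dig control-01.node.consul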


CHAPTER 7

Packer

The Mantl Vagrant image is built using Packer1. To build it for yourself, run packer build packer/vagrant.json. If you want to build with Atlas2, use packer push packer/vagrant.json.

The image is created using the existing Ansible playbooks, but run in a limited mode (specifically, with only tasks tagged bootstrap). Aside from Ansible, there are a number of shell scripts that are run. Here’s what they do:
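The effect is roughly the same as running the provisioning playbook with only the bootstrap tag enabled; the exact arguments are controlled by the Packer template, so treat the following invocation as an illustration rather than the authoritative command:

ansible-playbook --tags bootstrap mantl.yml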

7.1 ansible.sh

Installs Ansible from EPEL

7.2 vagrant.sh

Downloads the default insecure public key from the Vagrant GitHub repository3 to allow the vagrant user to log in.

7.3 vbox.sh

Installs the VirtualBox Guest Additions4 so that folder syncing can work inside Vagrant.

7.4 cleanup.sh

Performs cleanup tasks after installation is complete to limit image size when distributed. Specifically:

• Remove Ansible and cached yum information

• Remove persistent network information

• Remove temporary files, including /tmp/* and files under the home directory and log directories.

• Zero out all empty disk space and sync
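A rough sketch of the kind of commands involved (illustrative only; the actual cleanup.sh in the Mantl repository is authoritative, and the exact paths may differ):

yum -y remove ansible && yum clean all              # drop Ansible and cached yum data
rm -f /etc/udev/rules.d/70-persistent-net.rules     # forget persistent network information
rm -rf /tmp/* /root/* /var/log/*.log                # temporary, home-directory, and log files
dd if=/dev/zero of=/EMPTY bs=1M; rm -f /EMPTY; sync # zero out free space and sync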

1 https://www.packer.io
2 https://atlas.hashicorp.com
3 https://github.com/mitchellh/vagrant
4 http://www.virtualbox.org/manual/ch04.html


CHAPTER 8

FAQs

8.1 What is the relationship between Mantl and OpenStack Magnum?

Mantl and Magnum are currently not integrated. However, the projects could complement one another. Magnum provides an OpenStack API to instantiate a containerized environment within an OpenStack cloud. Magnum supports a range of container clustering implementations and Operating System distributions. Please refer to the Magnum wiki1 for additional Magnum details.

Mantl is an end-to-end solution for deploying and managing a microservices infrastructure. Mantl hosts are provisioned to OpenStack and other supported environments using Terraform2. Terraform configuration files manage OpenStack services such as compute, block storage, networking, etc. required to instantiate a Mantl host to an OpenStack cloud. The Terraform OpenStack Provider3 would need to be updated since it does not support Magnum. If/when this is accomplished, adding Magnum support to Mantl should be straightforward.

8.2 Can I use Mantl with Kubernetes?

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. Using the concepts of “labels” and “pods”, it groups the containers which make up an application into logical units for management and discovery.

Mantl has integrated both Apache Mesos and Kubernetes into its container stack. This integration provides users the freedom to choose the best scheduler for their workloads, promoting greater flexibility and choice.

8.3 Containers are great for running stateless applications but what about data/stateful services?

The container ecosystem is moving quickly, and durable persistent storage is one area that has received consistent attention. Mantl currently supports GlusterFS as an addon4 for shared persistent storage. Even without this software, there are databases and patterns that can provide reliable and consistent data for various use cases. For example, it is possible to run MongoDB, Redis, or Cassandra in a way that provides a consistent distributed quorum.

1 https://wiki.openstack.org/wiki/Magnum
2 https://www.terraform.io/
3 https://www.terraform.io/docs/providers/openstack/index.html
4 http://docs.mantl.io/en/latest/components/glusterfs.html


CHAPTER 9

Licenses

Mantl is licensed under the Apache License, Version 2.0, the text of which is reproduced here:

Apache License
Version 2.0, January 2004

http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and


(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright 2015 Cisco Systems, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

9.1 Listing of Licenses

The following is a list of the licenses of most included software.

addons, examples, library: Apache 2.01

mi-deploy: Apache 2.02

Packer: MPL 2.03

1 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
2 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
3 https://github.com/mitchellh/packer/blob/master/LICENSE


playbooks: Apache 2.04

plugins/callbacks/profile_tasks.py: MIT5

plugins/inventory/terraform.py: Apache 2.06

roles/calico: Apache 2.07

roles/chronos: Apache 2.08

roles/collectd: MIT9

roles/common: MIT10

roles/consul-template: MPL 2.0 (https://github.com/hashicorp/consul-template/blob/master/LICENSE)

roles/consul: MPL 2.011

roles/dnsmasq: Apache 2.012

roles/docker: Apache 2.013

roles/etcd: Apache 2.014

roles/glusterfs: GNU GPL 3.015

roles/handlers: Apache 2.016

roles/haproxy: Haproxy17

roles/kubernetes (k8s): Apache 2.018

roles/logrotate: MIT19

roles/logstash: Apache 2.020

roles/lvm: Apache 2.021

roles/mantlui: Apache 2.022

roles/marathon: Apache 2.023

roles/mesos: Apache 2.024

roles/nginx: MIT25

4 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
5 https://github.com/CiscoCloud/mantl/blob/master/plugins/callbacks/profile_tasks.py
6 https://github.com/CiscoCloud/mantl/blob/master/plugins/inventory/terraform.py
7 https://github.com/projectcalico/calico/blob/master/LICENSE
8 https://github.com/mesos/chronos/blob/master/LICENSE
9 https://collectd.org/wiki/index.php/Category:MIT_License
10 https://github.com/sunscrapers/ansible-role-common/blob/master/LICENSE
11 https://github.com/hashicorp/consul/blob/master/LICENSE
12 https://github.com/vmware/ansible-role-dnsmasq/blob/master/LICENSE
13 https://github.com/docker/docker/blob/master/LICENSE
14 https://github.com/coreos/etcd/blob/master/LICENSE
15 http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Understanding_the_GlusterFS_License
16 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
17 http://www.haproxy.org/download/1.3/doc/LICENSE
18 https://github.com/vmware/ansible-role-kubernetes-master/blob/master/LICENSE
19 https://github.com/retr0h/ansible-logrotate/blob/master/LICENSE
20 https://github.com/elastic/logstash/blob/master/LICENSE
21 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
22 http://docs.mantl.io/en/latest/license.html
23 https://github.com/mesosphere/marathon/blob/master/LICENSE
24 https://github.com/apache/mesos/blob/master/LICENSE
25 https://github.com/ANXS/nginx


roles/traefik: MIT26

roles/vault: MPL 2.027

roles/zookeeper: Apache 2.028

terraform: MPL 2.029

Vagrant: MIT30

security-setup: Apache 2.031

kubernetes: Apache 2.032

• Changelog33

26 https://github.com/containous/traefik/blob/master/LICENSE.md
27 https://github.com/hashicorp/vault/blob/master/LICENSE
28 https://github.com/apache/zookeeper/blob/trunk/LICENSE.txt
29 https://github.com/hashicorp/terraform/blob/master/LICENSE
30 https://github.com/mitchellh/vagrant/blob/master/LICENSE
31 https://github.com/CiscoCloud/mantl/blob/master/LICENSE
32 https://github.com/kubernetes/kubernetes/blob/master/LICENSE
33 https://github.com/CiscoCloud/mantl/blob/master/CHANGELOG.md


CHAPTER 10

License

Copyright © 2015 Cisco Systems, Inc.

Licensed under the Apache License, Version 2.01 (the “License”).

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/).

1 http://www.apache.org/licenses/LICENSE-2.0

